Re: Sorting Unique File Entries

by l3nz (Friar)
on Nov 24, 2003 at 13:10 UTC


in reply to Sorting Unique File Entries

This is a very simple pure-Perl solution that will also work on Win32 machines; it uses an associative array (a hash) to kill duplicate entries. What's nice about it is that if you also print $hDat{$d} in the printing loop, you get the number of times each line was repeated.
use strict;
my %hDat;
my $d;

map( $hDat{$_}++, <DATA> );   # each distinct line becomes a hash key; the value counts repeats

foreach $d (sort keys %hDat) {
    print $d;
}

__DATA__
a
b
c
dd
c
aa
zzzz
q
r
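For instance, a minimal sketch of that counting variant (only the print line changes; the shorter sample data is just illustrative):

use strict;
my %hDat;
map( $hDat{$_}++, <DATA> );   # count occurrences per line
foreach my $d (sort keys %hDat) {
    print "$hDat{$d} $d";     # count first, then the line itself
}
__DATA__
c
a
c
b

Here the duplicated line c is printed once, with a count of 2.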
You could also use a similar approach to perform case-insensitive duplicate-line removal that returns the last instance of each duplicated line in the full majesty of its original case:
use strict;
my %hDat;
my $d;

map( ($hDat{lc $_} = $_), <DATA> );   # key is the lowercased line, value is the last original-case line seen

foreach $d (sort keys %hDat) {
    print $hDat{$d};
}

__DATA__
a
Bongo
c
BoNgo
dd
c
A
zzzz
q
r
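If you wanted to keep the first instance of each line rather than the last, one possible variation (just a sketch, not part of the original post) is to skip the assignment once the lowercased key has already been seen:

use strict;
my %hDat;
foreach my $line (<DATA>) {
    $hDat{lc $line} = $line unless exists $hDat{lc $line};   # keep the first occurrence only
}
foreach my $d (sort keys %hDat) {
    print $hDat{$d};
}
__DATA__
Bongo
BoNgo
a
A

With this data it prints a and Bongo, where the last-instance version above would print A and BoNgo.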

Replies are listed 'Best First'.
Re: Re: Sorting Unique File Entries
by l3nz (Friar) on Nov 24, 2003 at 17:42 UTC
    This is a somewhat shorter version using the same technique but with no named temporary hash (it's more an exercise in concision than actually useful).
    use strict;
    foreach my $d ( sort keys %{{ map( ($_ => 1), <DATA> ) }} ) {
        print $d;
    }
    __DATA__
    a
    B
    a
    B
    dd
    I wonder if there's a cleaner way to write the mapped expression.
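    One commonly seen alternative (just a sketch, not an answer from the thread, and it does bring back a named hash) is a hash slice assignment, which drops the map entirely:

    use strict;
    my %seen;
    @seen{ <DATA> } = ();   # hash slice: every input line becomes a key, duplicates collapse
    print sort keys %seen;
    __DATA__
    a
    B
    a
    B
    dd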
