in reply to Sorting Unique File Entries

This is a very simple pure Perl solution that also works on Win32 machines; it uses an associative array to kill duplicate entries. A nice side benefit is that if you also print $hDat{$d} in the printing loop, you get to know how many times each line was repeated.
use strict;
my %hDat;
my $d;

map( $hDat{$_}++, <DATA> );

foreach $d (sort keys %hDat) {
    print $d;
}

__DATA__
a
b
c
dd
c
aa
zzzz
q
r
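As a minimal sketch of the counting variant mentioned above (the chomp and the tab-separated output format are my own additions, not part of the original post), only the print statement really changes:

use strict;
my %hDat;

# Count how many times each input line appears.
map( $hDat{$_}++, <DATA> );

# Print each unique line together with its repeat count.
foreach my $d (sort keys %hDat) {
    chomp(my $line = $d);
    print "$line\t$hDat{$d}\n";
}

__DATA__
a
b
c
dd
c
aa
zzzz
q
r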
You could also use a similar approach to perform case-insensitive duplicate line removal that returns the last instance of each duplicated line in the full majesty of its original case (later assignments to the same lowercased key simply overwrite earlier ones):
use strict;
my %hDat;
my $d;

map( ($hDat{lc $_} = $_), <DATA> );

foreach $d (sort keys %hDat) {
    print $hDat{$d};
}

__DATA__
a
Bongo
c
BoNgo
dd
c
A
zzzz
q
r

Re: Re: Sorting Unique File Entries
by l3nz (Friar) on Nov 24, 2003 at 17:42 UTC
    This is a somewhat shorter version using the same technique but without a named temporary hash (it's more an exercise in concision than actually useful).
    use strict;

    foreach my $d (sort keys %{{ map( ($_ => 1), <DATA>) }}) {
        print $d;
    }

    __DATA__
    a
    B
    a
    B
    dd
    I wonder if there's a cleaner way to write the mapped expression.
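    One possibility, offered only as a sketch (whether it is actually cleaner is a matter of taste): map's block form also works here, provided the pair is wrapped in parentheses so that Perl parses the braces as a block rather than as an anonymous hash constructor.

    use strict;

    # Same idea as above, but using map BLOCK LIST instead of map EXPR, LIST.
    # The parentheses around ($_ => 1) keep Perl from misreading the block
    # as a hash reference.
    foreach my $d (sort keys %{{ map { ($_ => 1) } <DATA> }}) {
        print $d;
    }

    __DATA__
    a
    B
    a
    B
    dd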