in reply to Replacing all but one duplicate lines

# Write to a separate file so the input isn't clobbered; opening the same
# name for writing would truncate it before we got to read it. Rename the
# new file over $filename afterwards if you want to replace it in place.
open INFILE, "<" . $filename
    or die "Couldn't open file " . $filename . " for reading : $!";
open OUTFILE, ">" . $filename . ".new"
    or die "Couldn't open file " . $filename . ".new for writing : $!";

# Copy everything up to and including the first occurrence of the heading.
while (<INFILE>) {
    print OUTFILE $_;
    last if /$heading_to_print/;
}

# Copy the rest, dropping any further occurrences of the heading.
while (<INFILE>) {
    print OUTFILE $_ unless /$heading_to_print/;
}

close INFILE;
close OUTFILE;
A little tip: in your error messages for open() and the like you should include the output of $!. Also, personally I don't like using global filehandles; I prefer lexical ones, but those only work on later versions of Perl.
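For what it's worth, here is a minimal sketch of the same kind of open calls using lexical filehandles and the three-arg form of open (available from Perl 5.6 on), with $! in the error messages. The names input.txt and output.txt are just stand-ins for your real paths:

use strict;
use warnings;

# Sketch only: lexical filehandles with three-arg open, $! in the errors.
# $filename and $output_filename are stand-ins for your real paths.
my $filename        = 'input.txt';
my $output_filename = 'output.txt';

open my $in, '<', $filename
    or die "Couldn't open file $filename for reading : $!";
open my $out, '>', $output_filename
    or die "Couldn't open file $output_filename for writing : $!";

while (my $line = <$in>) {
    print {$out} $line;
}

close $in;
close $out;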

HTH

UPDATE

This will remove any duplicate lines except the first occurrence, not just duplicates of one specific line. (Although it may chew up a lot of memory on a large file, since every distinct line ends up as a key in the hash.)

# As above, write to a separate file rather than truncating the input.
open INFILE, "<" . $filename
    or die "Couldn't open file " . $filename . " for reading : $!";
open OUTFILE, ">" . $filename . ".new"
    or die "Couldn't open file " . $filename . ".new for writing : $!";

# Print a line only the first time we see it; %hash counts occurrences.
my %hash;
while (<INFILE>) {
    print OUTFILE $_ unless $hash{$_}++;
}

close INFILE;
close OUTFILE;
This could also be done as a one-liner:
perl -ni.old -e "print $_ unless $_{$_}++" foo.txt
Which really is quite nice, isn't it? :-) (Note that the file will be "edited in place", and the original will be backed up as "foo.txt.old". The double quotes shown suit the Windows cmd shell; on a Unix shell put single quotes around the -e code so the shell doesn't interpolate $_.)
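Roughly speaking, -n wraps the code in a read loop over the input and -i.old renames the file to a .old backup while pointing the default output at a fresh copy under the original name. So, as a sketch (not the literal code perl generates), the one-liner behaves much like this standalone script:

use strict;
use warnings;

# Rough standalone equivalent of: perl -ni.old -e "print $_ unless $_{$_}++" foo.txt
# (a sketch of what -n and -i.old arrange, not the exact expansion)
my $file = 'foo.txt';

rename $file, "$file.old"
    or die "Couldn't back up $file : $!";

open my $in, '<', "$file.old"
    or die "Couldn't open file $file.old for reading : $!";
open my $out, '>', $file
    or die "Couldn't open file $file for writing : $!";

my %seen;
while (my $line = <$in>) {
    print {$out} $line unless $seen{$line}++;   # keep only the first copy of each line
}

close $in;
close $out;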

Yves / DeMerphq
---
Writing a good benchmark isn't as easy as it might look.