Are the duplicate records identical, character for character? And do you care about preserving the original order when you print them back out? If exact matches are enough, you can use a hash to speed things up:
my %uniq = ();
open(LIST, '<', 'file.txt') or die "Can't open file.txt: $!";
while (<LIST>) {
    chomp;              # strip the trailing newline
    $uniq{$_} = 1;      # duplicate lines collapse onto the same key
}
close(LIST);
# the keys of %uniq now contain all the records with duplicates removed
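If you need the records back in their original order, keys %uniq won't help (hash keys come back in no particular order), but you can print each line the first time you see it instead. A minimal sketch along the same lines, assuming the same file.txt:

my %seen = ();
open(LIST, '<', 'file.txt') or die "Can't open file.txt: $!";
while (<LIST>) {
    print unless $seen{$_}++;   # false only the first time a line appears
}
close(LIST);

The postfix $seen{$_}++ evaluates to 0 (false) the first time a given line shows up, so each record prints exactly once, in the order it first appeared.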
HTH