in reply to Removing duplicate records in text file

Are the duplicate records exactly the same? And do you care about the order of the records when you print them out again? You can use a hash to speed things up:

my %uniq = ();
open(LIST, 'file.txt') or die "Cannot open file.txt: $!";
while (<LIST>) {
    chomp;
    $uniq{$_} = 1;
}
close(LIST);
# the keys of %uniq now hold all the records with duplicates removed
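If the order does matter, a small variation prints each record the first time it appears and skips later duplicates. This is a sketch: it assumes the duplicates are byte-for-byte identical, and it reads sample records from __DATA__ here, where your real script would open file.txt as above.

```perl
use strict;
use warnings;

# Order-preserving dedup: %seen remembers which records have already
# been printed, so each record is emitted only on its first appearance.
my %seen;
while (my $line = <DATA>) {
    chomp $line;
    print "$line\n" unless $seen{$line}++;
}

__DATA__
apple
banana
apple
cherry
banana
```

The post-increment in $seen{$line}++ is the usual idiom: it returns the old count (false on first sight), then bumps it, so the test and the bookkeeping happen in one step.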

HTH