Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
Any ideas on how to make this faster?

    @clean = ();
    @check = ();
    open(LIST, 'file.txt') or die "Error 1";
    while (<LIST>) {
        chomp $_;
        $count = 0;
        foreach $i (@check) {
            chomp $i;
            if ($_ eq $i) { $count++; }
        }
        if ($count >= 1) {
            next;
        } else {
            push(@clean, $_);
            push(@check, $_);
        }
    }
    close(LIST);
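The inner foreach rescans every kept line for each new line, so the loop is O(n^2). The usual fix (a minimal sketch, not taken from the replies below) is a %seen hash, which makes each duplicate check a single O(1) lookup; the sample @lines data here is hypothetical, standing in for the lines read from file.txt:

    use strict;
    use warnings;

    # Hypothetical sample input standing in for the contents of file.txt.
    my @lines = ("one", "two", "one", "three", "two");

    my @clean;
    my %seen;                    # $seen{$line} is true once $line has been kept
    for my $line (@lines) {
        next if $seen{$line}++;  # O(1) hash lookup replaces the inner foreach scan
        push @clean, $line;      # first occurrence only
    }

With the hash there is also no need for the separate @check array: %seen does both jobs, remembering what has been seen and deciding what gets pushed onto @clean.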
Replies are listed 'Best First'.
Re: Removing duplicate records in text file
by valdez (Monsignor) on Aug 03, 2003 at 18:59 UTC
Re: Removing duplicate records in text file
by pzbagel (Chaplain) on Aug 03, 2003 at 18:56 UTC
Re: Removing duplicate records in text file
by blue_cowdawg (Monsignor) on Aug 03, 2003 at 19:29 UTC
Re: Removing duplicate records in text file
by BUU (Prior) on Aug 03, 2003 at 18:54 UTC
by pzbagel (Chaplain) on Aug 03, 2003 at 19:03 UTC
by ctilmes (Vicar) on Aug 03, 2003 at 21:52 UTC