in reply to Removing duplicate records in text file

 cat file.txt | uniq > file2.txt

 OK, granted it's not Perl, but it is fast.
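 For completeness, a Perl equivalent (a minimal sketch, using the same file names) that drops duplicates even when they are not adjacent and keeps the original line order:

 perl -ne 'print unless $seen{$_}++' file.txt > file2.txt

 The %seen hash records each line the first time it appears, so any later copy is skipped.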

Replies are listed 'Best First'.
Re: Re: Removing duplicate records in text file
by pzbagel (Chaplain) on Aug 03, 2003 at 19:03 UTC

    uniq only removes adjacent duplicates, so you also need to sort the lines in case the duplicate records are not next to each other.

    sort file.txt | uniq > file2.txt

    Also, being a Unix-y solution, it may leave Windows users out in the cold unless they have Cygwin or something similar installed.
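    A self-contained Perl script sidesteps that, since it needs nothing beyond Perl itself and no shell pipeline (a rough sketch, assuming the same input and output file names as above):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %seen;
    open my $in,  '<', 'file.txt'  or die "Can't read file.txt: $!";
    open my $out, '>', 'file2.txt' or die "Can't write file2.txt: $!";
    while (my $line = <$in>) {
        # print only the first occurrence of each line, preserving order
        print {$out} $line unless $seen{$line}++;
    }
    close $out or die "Can't close file2.txt: $!";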

      Or, if your sort supports it:
      sort -u file.txt > file2.txt