in reply to Checking LInes in Text File

It would help me to know what you are trying to do. The solutions offered thus far -- "sort -u" and using a hash -- both assume that you want to eliminate a line if it has a duplicate anywhere in the file. If all you want to do is eliminate successive repeated lines, something like this might be better:
    my $last = $_ = <>;     # read and print the first line
    print;
    while (<>) {
        print if $_ ne $last;   # print only when a line differs from its predecessor
        $last = $_;
    }

Replies are listed 'Best First'.
Re^2: Checking LInes in Text File
by dsheroh (Monsignor) on Jun 01, 2006 at 20:01 UTC
    Successive repeated lines can also be eliminated at the (unixy) command line with uniq infile > outfile, so long as you don't run the data through sort first.
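    To see the adjacency behavior concretely, here is a small sketch (the sample input is made up for illustration): uniq drops only consecutive repeats, so a line that reappears later in the file survives.

```shell
# uniq collapses adjacent duplicate lines only; the second 'a' below
# is separated from the first pair by 'b', so it is kept.
printf 'a\na\nb\na\n' | uniq
```

    This is exactly why running the data through sort first changes the result: sorting makes all duplicates adjacent, so uniq then removes every repeat, not just successive ones.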

    Also note that, of the solutions provided thus far, the hash-based option is the only one which will both eliminate all duplicates (printing only the first appearance of each line) and also preserve the original order of the (remaining) lines, which may or may not be significant to you.