If perchance you need to keep the unique lines of your file in their original order, then this will remove all but the first occurrence of each line and leave the rest in the order they first appeared.
Just redirect the output to a new file on the command line (and uncomment the open line).
#! perl -sw
use strict;

my %lines;

#open DATA, $ARGV[0] or die "Couldn't open $ARGV[0]: $!\n";

while (<DATA>) {
    print if not $lines{$_}++;
}

__DATA__
this is a line
this is another line
yet another
and yet another still
this is a line
more and more
and even more
this is a line
and this
and that
but not the other cos its a family website:)
Gives
C:\test>uniq
this is a line
this is another line
yet another
and yet another still
more and more
and even more
and this
and that
but not the other cos its a family website:)

C:\test>
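For what it's worth, assuming the script is saved as uniq.pl and the open line is uncommented so it reads the file named as its first argument, an invocation might look something like this (the filenames are just placeholders):

perl uniq.pl input.txt > deduped.txt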
The caveat, of course, is that with a large file that hash could get kind of big, but maybe that's OK if this is what you need to do.
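If memory does become a problem, one common trick (not from the post above, just a suggestion) is to key the hash on a fixed-size digest of each line rather than on the line itself, so a long line costs no more to remember than a short one. A minimal sketch using the core Digest::MD5 module:

#! perl -w
use strict;
use Digest::MD5 qw(md5);

my %seen;
while (<>) {
    # Remember a fixed 16-byte binary digest per unique line instead
    # of the whole line, so memory no longer grows with line length.
    print if not $seen{ md5($_) }++;
}

The trade-off is a vanishingly small chance of an MD5 collision silently dropping a line that wasn't actually a duplicate, which is probably acceptable for this kind of job.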
In reply to Re: Remove Duplicate Lines by BrowserUk, in thread Remove Duplicate Lines by dcb0127