while (<$IN>)
{
    if (/DATAmessage.*3\.0/)
    {
        print $OUT "$filename\n";
        last;    # match found - no need to look any further!
    }
}
If whatever you are looking for usually appears near the beginning of the file, the performance gain from bailing out early will be substantial.
Update:
Another way to print only the unique lines is the classic seen-hash idiom in pure Perl:
{
    my %unique;
    print $OUT sort grep { !$unique{$_}++ } <$IN>;
}
Again, List::Util can be used to speed up the pure-Perl implementation:
use List::Util qw(uniq);
# Note the parens: a bare "sort uniq <$IN>" parses as
# "sort SUBNAME LIST", i.e. uniq becomes the comparison
# routine instead of filtering the list.
print $OUT sort( uniq(<$IN>) );
I suppose that, depending upon the data, reversing the order (sorting first, then filtering out the duplicate lines) could be faster? I don't know, but if speed is needed, I would benchmark that approach as well. Also, instead of building a hash table, try: "print a line unless it's a repeat of the previous line". The results probably depend upon what the typical data actually looks like. For example:
my $prev = "";
foreach (sort <$IN>)
{
    print unless $_ eq $prev;
    $prev = $_;
}