From: "J. Gleixner"
Message-ID: <460bd4d0$0$494$815e3792@news.qwest.net>
Date: Thu, 29 Mar 2007 10:01:36 -0500

cadetg@googlemail.com wrote:
> Dear Perl Monks, I am developing at the moment a script which has to
> parse 20GB files. The files I have to parse are some logfiles. My
> problem is that it takes ages to parse the files. I am doing something
> like this:

You might be better off using a large egrep and/or by simplifying your
items. e.g. if your item contained 'abc' and 'abcd', you would only
have to search for 'abc'.

>
> my %lookingFor;
> # keys   => different name of one subset
> # values => array of one subset
>
> my $fh = new FileHandle "< largeLogFile.log";
> while (<$fh>) {
>     foreach my $subset (keys %lookingFor) {
>         foreach my $item (@{$subset}) {
>             if (<$fh> =~ m/$item/) {
>                 my $writeFh = new FileHandle ">> myout.log";
>                 print $writeFh <$fh>;
>             }

Open it once, before the while, and write $_, not <$fh>:

    print $writeFh $_ if /$item/;

>         }
>     }
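Putting the advice together, here is a minimal sketch: combine the items
into a single alternation compiled once with qr//, open the output handle
once before the loop, and test $_ directly instead of re-reading <$fh>.
The sub name filter_log and its arguments are illustrative, not from the
original post.

```perl
use strict;
use warnings;

# filter_log: stream $in_file once, appending to $out_file every line
# that matches any of @items. Hypothetical helper, for illustration only.
sub filter_log {
    my ($in_file, $out_file, @items) = @_;

    # One combined regex, compiled once; \Q...\E quotes metacharacters
    # so items are matched literally.
    my $alt = join '|', map { "\Q$_\E" } @items;
    my $re  = qr/$alt/;

    open my $in,  '<',  $in_file  or die "open $in_file: $!";
    open my $out, '>>', $out_file or die "open $out_file: $!";

    while (<$in>) {
        # Write the current line ($_), not another read from the handle.
        print {$out} $_ if /$re/;
    }
    close $in;
    close $out or die "close $out_file: $!";
}
```

With a combined regex the 20GB file is scanned once per line rather than
once per item, which is where most of the time goes.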