I am writing a little web log parsing script. It looks for a couple of regexes and counts some stuff, really simple. On a 200 MB logfile it takes around 8.35s real, 8.20s user, 0.14s system with ONE $mask. As soon as I put in two or more masks, the time goes over 100s... I tried to profile, but -d:DProf does not profile builtins.
The script is "learning" the filenames in a particular path and pushing them onto the @count array, so before pushing it checks whether there is already an entry with the same name.
(All this is on Red Hat ES3.)

    $ cat test.pl
    use strict;
    use File::Basename;

    my @count = ();
    my @mask = (
        '/some/path/',
        'some/other/path',
        'another/path',
    );

    open F, "access.log";
    while (<F>) {
        chomp;
        foreach my $m (@mask) {
            my $regex = "GET.*" . $m . ".*HTTP/1.1\" 200 [0-9].*";
            if (/$regex/) {
                s/.*GET //;
                s/ HTTP.*//;
                my $bn = basename($_);
                my $found = 0;
                foreach (@count) {
                    if ($_->[0] eq $bn) {
                        $_->[1]++;
                        $found = 1;
                        last;
                    }
                }
                if ($found == 0) {
                    push @count, [ $bn, 1 ];
                }
            }
        }
    }
    foreach (@count) {
        print "$_->[0] = $_->[1]\n";
    }
    close F;
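For what it's worth, the slowdown is almost certainly the regex rebuild: with /$regex/, Perl recompiles the pattern only when the interpolated string has changed. With one mask the string is identical on every line, so the compiled pattern is reused; with two or more masks it changes on every inner iteration, forcing a recompile per mask per line. The linear scan over @count adds to the cost. Below is a minimal sketch of the usual fixes: compile the masks once into a single qr// alternation before the read loop, and count basenames in a hash. The mask list and the access.log name come from the post; the combined pattern itself is my assumption about what should match.

    use strict;
    use warnings;
    use File::Basename;

    my @mask = ( '/some/path/', 'some/other/path', 'another/path' );

    # Build ONE pattern, ONCE, before the read loop. qr// keeps it
    # compiled, and the alternation covers all masks in a single match.
    my $alt   = join '|', map { quotemeta } @mask;
    my $regex = qr{GET\s+(\S*(?:$alt)\S*)\s+HTTP/1\.1" 200 \d};

    # Hash lookup is O(1) per line; the array scan in the post is
    # O(number of distinct files seen) per matching line.
    my %count;

    open my $fh, '<', 'access.log' or die "access.log: $!";
    while (my $line = <$fh>) {
        next unless $line =~ $regex;   # $1 captures the requested path
        $count{ basename($1) }++;
    }
    close $fh;

    print "$_ = $count{$_}\n" for sort keys %count;

Note that this matches each line at most once; the original can count a line once per mask it matches, which for distinct paths should come out the same. If the per-mask loop has to stay for some reason, precompiling each mask's pattern with qr// outside the while loop gets most of the win on its own.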