http://qs1969.pair.com?node_id=1155109

rickyw59 has asked for the wisdom of the Perl Monks concerning the following question:

Hello, I'm trying to write a script to go through hundreds of "log.gz" files, roughly 500,000 lines per file. Is there something limiting me? How can Perl parse a single file about three times faster, yet fall so far behind once I start forking? Below are the timings for parsing a single file. When timing 70 files, nodejs takes 20 seconds while Perl takes 60 seconds.

zcat &> /dev/null  0.54s user  0.01s system   99% cpu  0.549 total
node test.js       0.79s user  0.05s system  130% cpu  0.646 total
perl test.pl       0.23s user  0.03s system   38% cpu  0.686 total

I've tried forking once per file (limited to the number of CPUs, 24). I've also tried dividing the logs evenly across the forks, i.e. forking 24 times with each fork working through n files; somehow this was slightly slower. Both node and perl spawn zcat and parse line by line. I'm unable to use zlib, because the files are zipped incorrectly by the device generating the logs.
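A minimal sketch of that second approach (each fork working through an even slice of the file list) is below. The round-robin slicing is just one way to split the list evenly, and the per-line work is the same zcat pipeline as in the full script further down:

use strict;
use warnings;
use Parallel::ForkManager;

my $workers = 24;
my $pm      = Parallel::ForkManager->new($workers);
my @files   = sort glob '/data/logs/*.log.gz';

for my $w (0 .. $workers - 1) {
    # each worker takes files $w, $w + 24, $w + 48, ... (round-robin slice)
    my @mine = @files[grep { $_ % $workers == $w } 0 .. $#files];
    next unless @mine;
    $pm->start and next;
    for my $file (@mine) {
        open(my $fh, '-|', '/bin/zcat', $file) or die "zcat $file: $!";
        while (my $line = <$fh>) {
            my @matches = $line =~ /".*?"|\S+/g;
            # ... same per-line work as in the full script below ...
        }
        close $fh;
    }
    $pm->finish;
}
$pm->wait_all_children;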

*Edit: the directory is an NFSv3-mounted SAN. For these tests I'm only reading, not printing, so I/O on the test server should not be an issue. Both the node and perl tests are run in the same environment.

#!/usr/local/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

my $pm = new Parallel::ForkManager(24);

my $dir = '/data/logs/*.log.gz';
my @files = sort(glob "$dir");

for my $file (@files) {
    $pm->start and next;
    open(FH, "-|") || exec "/bin/zcat", $file;
    while (my $line = <FH>) {
        my @matches = $line =~ /".*?"|\S+/g;
        # print "$matches[0],$matches[1],$matches[3],$matches[4]\n";
        # matches[0] = date, matches[1] = time, matches[3] = source IP,
        # matches[4] = dest IP; some other matches are used or may be used.
        # Each line is space separated, but any field with a space is inside "",
        # hence the regex instead of split.
    }
    $pm->finish;
}
$pm->wait_all_children;
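For reference, here is the tokenizing regex from the script run against a made-up line. The field layout is hypothetical (the real format isn't shown above), but it illustrates why the quoted-field regex is used instead of a plain split on whitespace:

use strict;
use warnings;

# Hypothetical log line -- only the field positions match the description above.
my $line = '2015-12-01 23:59:59 fw1 10.0.0.1 10.0.0.2 "message field with spaces"';

my @matches = $line =~ /".*?"|\S+/g;
print "$_: $matches[$_]\n" for 0 .. $#matches;

# Output:
# 0: 2015-12-01
# 1: 23:59:59
# 2: fw1
# 3: 10.0.0.1
# 4: 10.0.0.2
# 5: "message field with spaces"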