rickyw59 has asked for the wisdom of the Perl Monks concerning the following question:
Hello, I'm trying to write a script to go through hundreds of "log.gz" files, roughly 500,000 lines per file. Is there something limiting me? How can perl process a single file 3 times faster, yet have its performance tank as soon as I start forking? Below are the results of timing the parse of a single file; when timing 70 files, nodejs takes 20 seconds while perl is at 60 seconds.
zcat &> /dev/null 0.54s user 0.01s system 99% cpu 0.549 total
node test.js 0.79s user 0.05s system 130% cpu 0.646 total
perl test.pl 0.23s user 0.03s system 38% cpu 0.686 total
I've tried forking for each file (limited to the number of CPUs, 24). I've also tried dividing the logs evenly by the number of forks, i.e. fork 24 times and have each fork work through n files; somehow this was slightly slower.
Both node and perl are spawning zcats and parsing line by line. I'm unable to use zlib, because the files are zipped incorrectly by the device generating the logs.
*Edit: the directory is an NFSv3-mounted SAN. For the tests I'm only reading, not printing, so IO on the test server should not be an issue. Also, both the node and perl tests are run in the same environment.
#!/usr/local/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;
my $pm = new Parallel::ForkManager(24);
my $dir = '/data/logs/*.log.gz';
my @files = sort(glob "$dir");
for my $file (@files) {
    $pm->start and next;
    open( FH, "-|" ) || exec "/bin/zcat", $file;
    while ( my $line = <FH> ) {
        my @matches = $line =~ /".*?"|\S+/g;
        # print "$matches[0],$matches[1],$matches[3],$matches[4]";
        # matches[0] = date, matches[1] = time, matches[3] = source IP
        # matches[4] = dest IP, some other matches are used or may be used.
        # line is space separated, but any field with a space is inside "",
        # hence regex instead of split.
    }
    $pm->finish;
}
$pm->wait_all_children;
Re: Help me beat NodeJS (sleep)
by tye (Sage) on Feb 12, 2016 at 20:34 UTC
#!/usr/local/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;
my $pm = new Parallel::ForkManager(24);
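# 0 disables the sleep-based pseudo-blocking wait; a real blocking waitpid is used instead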
$pm->set_waitpid_blocking_sleep(0);
...
Re: Help me beat NodeJS
by AnomalousMonk (Archbishop) on Feb 12, 2016 at 19:02 UTC
Please see the many postings of BrowserUk in response to questions of the general form "Why doesn't multithreading/multiprocessing solve all my problems?" My impression is that he handles two or three applications like the one you describe before he finishes his first mug of coffee each morning — or maybe it's tea in his case.
Give a man a fish: <%-{-{-{-<
Re: Help me beat NodeJS
by runrig (Abbot) on Feb 12, 2016 at 18:46 UTC
open( my $fh, "-|", '/bin/zcat' => $file ) or die "Error opening $file: $!";
But since most of the time is spent reading and parsing, this will make a negligible improvement performance-wise.
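For context, here is a minimal sketch (mine, not from runrig's post) of that open call dropped into the loop from the question; the path and field indices are taken from the original script.
#!/usr/local/bin/perl
use strict;
use warnings;

for my $file ( sort glob '/data/logs/*.log.gz' ) {
    # lexical filehandle, list-form pipe open, and error checking
    open( my $fh, "-|", '/bin/zcat' => $file )
        or die "Error opening $file: $!";
    while ( my $line = <$fh> ) {
        my @matches = $line =~ /".*?"|\S+/g;
        # print "$matches[0],$matches[1],$matches[3],$matches[4]\n";
    }
    close $fh;
}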
Re: Help me beat NodeJS
by Laurent_R (Canon) on Feb 12, 2016 at 18:57 UTC
Hmm, your process is probably limited by IO throughput, so using more CPUs won't help much against the disk-access bottleneck. In such cases it often turns out to be somewhat slower, because of the overhead of managing multiple processes.
Re: Help me beat NodeJS
by marioroy (Parson) on Feb 13, 2016 at 17:23 UTC
Update: added pattern matching to the demonstration.
Greetings,
At first glance, I am not seeing NodeJS performing the regex for fields containing spaces. That's not really fair, IMHO :) The regex in Perl is likely the main reason it takes longer. Having 24 workers reading simultaneously from the NFSv3-mounted SAN may be another reason, even though NodeJS seems unaffected by it.
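As an aside (my sketch, not part of Mario's post): if only the first five fields are ever needed, one way to cut the regex cost might be to anchor the match and capture just those fields instead of tokenising the whole line. This assumes the field layout described in the question and a filehandle $fh reading the decompressed log; it would need benchmarking against the original /".*?"|\S+/g.
# Sketch only: capture just the leading fields rather than every field.
my $tok  = qr/"[^"]*"|\S+/;
my $lead = qr/^\s*($tok)\s+($tok)\s+($tok)\s+($tok)\s+($tok)/;

while ( my $line = <$fh> ) {
    if ( my @matches = $line =~ $lead ) {
        # print "$matches[0],$matches[1],$matches[3],$matches[4]\n";
    }
}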
The following is an alternative solution. IO is faster in the upcoming 1.700 release than in MCE 1.608, hence the note about using 1.699_010. The code also inlines a commented-out alternative for the Windows platform.
#!/usr/local/bin/perl
use strict;
use warnings;
# IO performance in 1.699_010 is faster than MCE 1.608.
# https://metacpan.org/release/MARIOROY/MCE-1.699_010
use MCE::Loop;
use MCE::Candy;
my $dir = 'logs/*.log.gz';
my @files = sort(glob "$dir");
my $pattern = "some_string";
MCE::Loop::init {
    gather      => MCE::Candy::out_iter_fh(\*STDOUT),
    chunk_size  => '240k',
    max_workers => 24,
    use_slurpio => 1,
};

open( my $fh, "-|", "zcat", @files ) or die "open error: $!\n";

mce_loop {
    my ( $mce, $slurp_ref, $chunk_id ) = @_;
    my $buf = '';

    # Quickly determine if a match is found...
    # ...and process the slurped chunk only if true.
    if ( $$slurp_ref =~ /$pattern/m ) {

        # The following is fast on Unix, but performance degrades
        # drastically on Windows beyond 4 workers.
        open my $MEM_FH, '<', $slurp_ref;
        while ( my $line = <$MEM_FH> ) {
            if ( $line =~ /$pattern/ ) {
                my @matches = $line =~ /".*?"|\S+/g;
                $buf .= "$matches[0],$matches[1],$matches[3],$matches[4]\n";
            }
        }
        close $MEM_FH;

        # Therefore, use the following construction on Windows.
        # while ( $$slurp_ref =~ /([^\n]+\n)/mg ) {
        #     # my $line = $1; # possibly save $1 to not lose the value
        #     # not necessary for this demonstration
        #     my @matches = $1 =~ /".*?"|\S+/g;
        #     $buf .= "$matches[0],$matches[1],$matches[3],$matches[4]\n";
        # }
    }

    # Send output to the manager process for orderly output to STDOUT
    $mce->gather($chunk_id, $buf);

} $fh;

close $fh;
There is only one reader across NFS, the manager process, not the workers. The effect is sequential IO, which is typically faster than random IO.
Regards, Mario
Update: added pattern matching to the demonstration
If parallel reads are desired, the following demonstration does the same thing. Basically, specifying chunk_size => 1 is all one needs to do to get MCE to run like Parallel::ForkManager.
#!/usr/local/bin/perl
use strict;
use warnings;
use MCE::Loop;
my $dir = 'logs/*.log.gz';
my @files = sort(glob "$dir");
my $pattern = "some_string";
MCE::Loop::init {
    max_workers => 24,
    chunk_size  => 1,
};

mce_loop {
    my ( $mce, $chunk_ref, $chunk_id ) = @_;

    open( my $fh, "-|", "zcat", $chunk_ref->[0] ) or die "open error: $!\n";
    while ( my $line = <$fh> ) {
        if ( $line =~ /$pattern/ ) {
            my @matches = $line =~ /".*?"|\S+/g;
            print "$matches[0],$matches[1],$matches[3],$matches[4]\n";
        }
    }
    close $fh;

} @files;
Regards, Mario
Wow, thanks, I'll give this a shot. I will have to read some more on MCE; it looks very useful. I should have clarified: in the "parser" function in nodejs I'm applying the same regex as in perl, to be fair. I've run the tests looking for a specific string (/some_string/) and with the regex in the above code (/".*?"|\S+/g), which captures everything into an array, since the lines are in this format: ' 1970-01-01 00:00:00 1.1.1.1 "A multi-word field" 2.2.2.2 '
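To illustrate (my example, not from the thread), the tokenising regex used in the scripts above splits that sample line like this:
use strict;
use warnings;

my $line = ' 1970-01-01 00:00:00 1.1.1.1 "A multi-word field" 2.2.2.2 ';
my @matches = $line =~ /".*?"|\S+/g;
# @matches is now:
#   '1970-01-01', '00:00:00', '1.1.1.1', '"A multi-word field"', '2.2.2.2'
print join( ",", @matches[ 0, 1, 3, 4 ] ), "\n";
# prints: 1970-01-01,00:00:00,"A multi-word field",2.2.2.2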
Re: Help me beat NodeJS
by BrowserUk (Patriarch) on Feb 12, 2016 at 22:59 UTC
'use strict';
var cluster = require('cluster');
var spawn = require('child_process').spawn;
var stream = require('stream');
var glob = require('glob');
var files = glob.sync('/data/logs/*.log.gz');

function Parser() {
    var t = new stream.Transform( { objectMode: true } );
    t._transform = transform;
    t._flush = flush;
    t.parse = parse;
    return t;
}

function transform(chunk, enc, done) {
    var data = chunk.toString();
    // if there is remaining data from the last chunk, prepend it to this one
    if (this._lastLineData) data = this._lastLineData + data;
    var lines = data.split('\n');
    // keep the trailing partial line for the next chunk
    this._lastLineData = lines[lines.length - 1];
    for (var i = 0; i < lines.length - 1; i++) {
        this.push(parse(lines[i]));
    }
    done();
}

function flush(done) {
    // push any remaining buffered data at end of stream
    if (this._lastLineData) this.push(this._lastLineData);
    this._lastLineData = null;
    done();
}

function parse(line) {
    // Split line into fields, or apply regex to filter results
    return line;
}

if (cluster.isMaster) {
    var workers = 24; // number of CPUs
    var logsPerWorker = Math.floor(files.length / workers);
    var remainder = files.length % workers;
    var logSlice = {};
    var sliceStart = 0;
    var sliceEnd = 0;
    for (var i = 0; i < workers; i++) {
        sliceEnd += logsPerWorker;
        if (remainder) {
            sliceEnd++;
            remainder--;
        }
        logSlice[i] = files.slice(sliceStart, sliceEnd);
        sliceStart = sliceEnd;
        if (sliceStart >= files.length) {
            // if no more logs to distribute, break out of loop
            break;
        }
    }
    for (var i = 0; i < workers; i++) {
        if (!logSlice[i]) break;
        // Fork and pass list of logs for the child to process
        cluster.fork({ logs: logSlice[i] });
    }
} else {
    var liner = Parser();
    var logs = process.env.logs.split(','); // read logs passed by parent
    for (var i = 0; i < logs.length; i++) {
        var child = spawn('zcat', [ logs[i] ]);
        child.stdout.pipe(liner); // pipe this to stdout or to a writeStream to write to a file
    }
}