Greetings, fellow Monks.
I came across an old thread and wanted to show how one might put extra CPU cores to use. The pigz binary is a parallel implementation of gzip and, depending on the data, may decompress faster than gzip. Here the requirement is for each MCE worker to process one file per pass through the MCE loop, so we set the chunk size accordingly (chunk_size => 1).
To make this more interesting, the workers both write to STDOUT (routed through the manager process, so output is not garbled) and gather key-value pairs back to the manager.
use strict;
use warnings;
use feature qw(say);

use MCE::Loop chunk_size => 1, max_workers => 4;

my @files = glob '*.gz';

my %result = mce_loop {
   my ($mce, $chunk_ref, $chunk_id) = @_;

   ## $file = $_; same thing when chunk_size => 1
   my $file = $chunk_ref->[0];

   ## http://www.zlib.net/pigz/
   ## For pigz, we want -p1 to run on one core only.
   ## open my $fh, '-|', 'pigz', '-dc', '-p1', $file or do { ... }

   open my $fh, '-|', 'gzip', '-dc', $file or do {
      warn "open error ($file): $!\n";
      MCE->next();
   };

   my $count = 0;

   while ( my $line = <$fh> ) {
      $count++;   # simulate filtering or processing
   }

   close $fh;

   ## Send output to the manager process.
   ## Ensures workers do not garble STDOUT.
   MCE->say("$file: $count lines");

   ## Gather key-value pair.
   MCE->gather($file, $count);

} @files;

## Workers may persist after running. Request workers to exit.
MCE::Loop->finish();

## Ditto, same output using gathered data.
for my $file (@files) {
   say "$file: ", $result{$file}, " lines";
}
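As a side note, the per-file pipeline the worker runs can be tried directly from the shell. This is a minimal sketch; the file name sample.gz is hypothetical, and the pigz line assumes pigz is installed (gzip is the fallback used in the code above):

```shell
# Create a small test file (hypothetical name sample.gz).
printf 'alpha\nbeta\ngamma\n' | gzip -c > sample.gz

# Count lines, mirroring what each MCE worker does per file.
gzip -dc sample.gz | wc -l

# Same with pigz pinned to one core via -p1, since MCE already
# provides the parallelism across files (assumes pigz is installed):
# pigz -dc -p1 sample.gz | wc -l
```

The point of -p1 is that each worker already occupies one core, so letting pigz spawn its own threads on top of max_workers => 4 would oversubscribe the CPU.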
Regards, Mario.
In reply to Fast gzip log reader with MCE by marioroy