Update: Using the same demonstration below on a Mac running the upcoming MCE 1.706 release, the time is 2.2 seconds. Running with four workers also completes in 2.2 seconds. Basically, we have reached the underlying hardware limit.
Today, I looked at MCE against the 2 GB plain text file, both when it resides in the OS file-system cache and when it does not. Increasing the chunk_size value is beneficial, especially when the file is not already in the FS cache.
With a single update to the code, increasing the chunk_size value from '1m' to '24m', the total run now completes in 3.2 seconds.
use strict;
use warnings;

use MCE::Flow;
use MCE::Shared;

my $counter1 = MCE::Shared->scalar( 0 );   # total line count
my $counter2 = MCE::Shared->scalar( 0 );   # total match count

mce_flow_f {
    chunk_size  => '24m',
    max_workers => 8,
    use_slurpio => 1,
}, sub {
    my ( $mce, $chunk_ref, $chunk_id ) = @_;

    # Count newlines in the slurped chunk.
    my $numlines = $$chunk_ref =~ tr/\n//;

    # Count lines ending in 123456, allowing an optional trailing CR.
    my $occurrences = () = $$chunk_ref =~ /123456\r?$/mg;

    $counter1->incrby( $numlines );
    $counter2->incrby( $occurrences );
}, "Dictionary2GB.txt";

print "Num lines  : ", $counter1->get(), "\n";
print "Occurrences: ", $counter2->get(), "\n";
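The two counting idioms used inside the worker, tr/// in scalar context and a list-context match assigned to an empty list, can be demonstrated in isolation on a small string (this is just a sketch; the sample text and expected counts are my own):

use strict;
use warnings;

my $chunk = "abc\n123456\nxyz\n123456\n";

# tr/// in scalar context counts matching characters without
# modifying the string: here, the number of newlines.
my $numlines = $chunk =~ tr/\n//;

# The "countof" idiom: a global match in list context, assigned
# through an empty list, yields the number of matches.
my $occurrences = () = $chunk =~ /123456\r?$/mg;

print "Num lines  : $numlines\n";     # 4
print "Occurrences: $occurrences\n";  # 2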
One day, I will try another technique inside MCE to see if IO performance can be improved further.
Resolved.
In reply to Re: How to optimize a regex on a large file read line by line? by marioroy
in thread How to optimize a regex on a large file read line by line? by John FENDER