"In order to speed up the search, I dare to suggest choosing a large value of n."

Don't assume that the bigger the read, the faster it will run; it just doesn't work out that way.
On my systems, 64kb reads work out marginally best (YMMV):
C:\test>junk -B=4 < 1gb.dat
Found 6559 matches in 10.778 seconds using 4 kb reads

C:\test>junk -B=64 < 1gb.dat
Found 6559 matches in 10.567 seconds using 64 kb reads

C:\test>junk -B=256 < 1gb.dat
Found 6559 matches in 10.574 seconds using 256 kb reads

C:\test>junk -B=1024 < 1gb.dat
Found 6559 matches in 10.938 seconds using 1024 kb reads

C:\test>junk -B=4096 < 1gb.dat
Found 6559 matches in 10.995 seconds using 4096 kb reads

C:\test>junk -B=65536 < 1gb.dat
Found 6559 matches in 12.533 seconds using 65536 kb reads
Code:
#! perl -slw
use strict;
use Time::HiRes qw[ time ];

our $B //= 64;                      ## block size in kb, set with -B=nn on the command line

$/ = \( $B * 1024 );                ## read fixed-size blocks rather than lines
binmode STDIN, ':raw:perlio';

my $start = time;
my $count = 0;

while( <STDIN> ) {
    ++$count while m[123]g;         ## count every occurrence of '123' in the block
}

printf "Found %d matches in %.3f seconds using %d kb reads\n",
    $count, time() - $start, $B;
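One caveat the benchmark above doesn't address: with fixed-size block reads, a match that straddles a block boundary will be missed. Below is a minimal sketch (my addition, not part of the benchmark script) of one way to handle that, carrying the last length(pattern)-1 bytes of each block into the next; the -B switch and the literal pattern '123' are reused from above.

Code:

#! perl -slw
## Sketch only: carry a short tail between blocks so a match straddling a
## block boundary is still counted. Assumes the fixed-length pattern '123'.
use strict;

our $B //= 64;
$/ = \( $B * 1024 );                ## fixed-size block reads, as above
binmode STDIN, ':raw:perlio';

my $overlap = 2;                    ## length( '123' ) - 1
my $tail    = '';
my $count   = 0;

while( <STDIN> ) {
    my $buf = $tail . $_;           ## prepend the tail saved from the previous block
    ++$count while $buf =~ m[123]g;
    $tail = length( $buf ) > $overlap ? substr( $buf, -$overlap ) : $buf;
}

print "Found $count matches";

An overlap of length(pattern)-1 bytes is just enough that a straddling match is seen once, while a match lying wholly inside the previous block can never fit inside the carried tail and so is never counted twice; a variable-length pattern would need a larger, pattern-specific overlap.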
In reply to Re^3: use regular expressions across multiple lines from a very large input file by BrowserUk
in thread use regular expressions across multiple lines from a very large input file by rizzy