in reply to Re^5: Faster and more efficient way to read a file vertically
in thread Faster and more efficient way to read a file vertically

Not "flaws", but here is a look at performance as a function of read-buffer size... (something LanX was saying needs tuning)

~$ perl vert4.pl
Benchmark: timing 3 iterations of 1, 10, 100, 1000, 10000, 100000, 1000000...
      1:  1 wallclock secs ( 0.94 usr +  0.01 sys =  0.95 CPU) @  3.16/s (n=3)
            (warning: too few iterations for a reliable count)
     10:  1 wallclock secs ( 0.23 usr +  0.02 sys =  0.25 CPU) @ 12.00/s (n=3)
            (warning: too few iterations for a reliable count)
    100:  0 wallclock secs ( 0.17 usr +  0.01 sys =  0.18 CPU) @ 16.67/s (n=3)
            (warning: too few iterations for a reliable count)
   1000:  0 wallclock secs ( 0.16 usr +  0.03 sys =  0.19 CPU) @ 15.79/s (n=3)
            (warning: too few iterations for a reliable count)
  10000:  0 wallclock secs ( 0.18 usr +  0.00 sys =  0.18 CPU) @ 16.67/s (n=3)
            (warning: too few iterations for a reliable count)
 100000:  0 wallclock secs ( 0.20 usr +  0.03 sys =  0.23 CPU) @ 13.04/s (n=3)
            (warning: too few iterations for a reliable count)
1000000:  1 wallclock secs ( 0.23 usr +  0.03 sys =  0.26 CPU) @ 11.54/s (n=3)
            (warning: too few iterations for a reliable count)
use strict;
use warnings;

use Benchmark qw{ cmpthese timethese };
use Test::More qw{ no_plan };
use String::Random 'random_regex';    # needed for random_regex() below

my $fn = 'dna.txt';
unless ( -e $fn )
{
    open my $fh, '>', $fn or die $!;
    print $fh random_regex( '[ACTG]{42}' ), "\n" for 1 .. 1e6;
}

open my $inFH, q{<}, $fn or die $!;

my $offset = 9;    # Column 10 if numbering from 1
my @a = qw( 1 10 100 1000 10000 100000 1000000 );

sub method
{
    seek $inFH, 0, 0;
    my $buffer  = <$inFH>;
    my $lineLen = length $buffer;
    # my $nLines = 500;
    my $nLines    = shift;
    my $chunkSize = $lineLen * $nLines;
    seek $inFH, 0, 0;
    my $retStr;
    my $mask =
        qq{\x00} x $offset
      . qq{\xff}
      . qq{\x00} x ( $lineLen - $offset - 1 );
    $mask x= $nLines;
    while ( my $bytesRead = read $inFH, $buffer, $chunkSize )
    {
        ( my $anded = $buffer & $mask ) =~ tr{\x00}{}d;
        $retStr .= $anded;
    }
    return \ $retStr;
}

timethese( 3, { map { $_, "method( $_ )" } @a } );
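The heart of method() — AND away every byte except the target column, then delete the NUL bytes — can be seen in miniature. A minimal sketch with made-up 5-byte lines (the toy data and column here are assumptions for illustration, not the benchmark's dna.txt):

```perl
use strict;
use warnings;

# Toy data: three fixed-length "lines" of 4 bases plus a newline.
my $lineLen = 5;
my $offset  = 2;                         # zero-based column to extract
my $buffer  = "ACGT\nTGCA\nGATC\n";
my $nLines  = 3;

# Mask is \x00 everywhere except \xff at the target column of each line.
my $mask = "\x00" x $offset
         . "\xff"
         . "\x00" x ( $lineLen - $offset - 1 );
$mask x= $nLines;

# Bitwise AND zeroes every byte outside the column ('A' & "\x00" is "\x00",
# 'G' & "\xff" is 'G'), then tr///d deletes the NULs, leaving one byte per line.
( my $col = $buffer & $mask ) =~ tr{\x00}{}d;
print "$col\n";    # GCT
```

Scaling this up is just a matter of repeating the mask for however many lines fit in the read buffer, which is exactly what $mask x= $nLines; does in the benchmark.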

Replies are listed 'Best First'.
Re^7: Faster and more efficient way to read a file vertically
by johngg (Canon) on Nov 07, 2017 at 16:39 UTC

    I guess that the tuning parameters will vary depending on the specification of the target system and the line length of the data file. On your system the best performance (without narrowing it down further) looks to be with a 10,000 line buffer. On my rather elderly, vintage 2008 IIRC, Core 2 Duo laptop the sweet spot is around 1,000 lines for both the unpack and mask methods. Working on a 2,500,000 line file with 51 byte (inc. line terminator) lines I get the following ...

    ok 1 - ANDmask
    ok 2 - unpackM
             Rate  u950 u1050  u900 u1100 u1000  A950 A1050 A1100  A900 A1000
    u950   1.28/s    --   -0%   -0%   -1%   -1%  -39%  -39%  -39%  -39%  -41%
    u1050  1.28/s    0%    --   -0%   -0%   -1%  -39%  -39%  -39%  -39%  -41%
    u900   1.28/s    0%    0%    --   -0%   -1%  -39%  -39%  -39%  -39%  -41%
    u1100  1.28/s    1%    0%    0%    --   -1%  -39%  -39%  -39%  -39%  -41%
    u1000  1.29/s    1%    1%    1%    1%    --  -38%  -38%  -39%  -39%  -40%
    A950   2.10/s   65%   64%   64%   64%   62%    --   -0%   -0%   -0%   -3%
    A1050  2.10/s   65%   64%   64%   64%   63%    0%    --   -0%   -0%   -3%
    A1100  2.10/s   65%   64%   64%   64%   63%    0%    0%    --   -0%   -3%
    A900   2.11/s   65%   65%   65%   64%   63%    0%    0%    0%    --   -2%
    A1000  2.16/s   69%   69%   69%   68%   67%    3%    3%    3%    2%    --
    1..2

    ... with this code.

    Cheers,

    JohnGG

      Hi johngg, what I was rather trying to say is that all contestants should compete under equal conditions, i.e. all of them should read in chunks rather than in single lines, as some of them did. But all the same, your "ANDmask" is the fastest.

      Which is weird: to be fast, the simple act of extracting part of the data requires modifying the unrelated data, instead of just indexing into it. Sure, it comes down to the speed of the bitwise AND and the transliteration, but still... So, here is something completely different (sorry, I keep adjusting your setup to suit my "dna.txt"):

      use strict;
      use warnings;

      use Benchmark qw{ cmpthese timethese };
      use Test::More qw{ no_plan };
      use String::Random 'random_regex';
      use PDL;

      my $fn = 'dna.txt';
      unless ( -e $fn )
      {
          open my $fh, '>', $fn or die $!;
          print $fh random_regex( '[ACTG]{42}' ), "\n" for 1 .. 1e6;
      }

      open my $inFH, q{<}, $fn or die $!;
      binmode $inFH;

      my $buffer    = <$inFH>;
      my $lineLen   = length $buffer;
      my $nLines    = 500;
      my $chunkSize = $lineLen * $nLines;
      my $offset    = 9;    # Column 10 if numbering from 1

      my %methods = (
          ANDmask => sub {    # Multi-line AND mask by johngg
              seek $inFH, 0, 0;
              my $retStr;
              my $mask =
                  qq{\x00} x $offset
                . qq{\xff}
                . qq{\x00} x ( $lineLen - $offset - 1 );
              $mask x= $nLines;
              while ( my $bytesRead = read $inFH, $buffer, $chunkSize )
              {
                  ( my $anded = $buffer & $mask ) =~ tr{\x00}{}d;
                  $retStr .= $anded;
              }
              return \ $retStr;
          },
          pdl => sub {
              seek $inFH, 0, 0;
              my $retStr;
              my $chunkPDL = zeroes( byte, $lineLen, $nLines );
              my $bufRef   = $chunkPDL->get_dataref;
              while ( my $bytesRead = read $inFH, $$bufRef, $chunkSize )
              {
                  my $lastLine = $bytesRead / $lineLen - 1;
                  $retStr .=
                      ${ $chunkPDL->slice( "$offset,0:$lastLine" )->get_dataref };
              }
              return \ $retStr;
          },
      );

      ok ${ $methods{ ANDmask }->() } eq ${ $methods{ pdl }->() };
      cmpthese( -10, { map { $_ => $methods{ $_ } } keys %methods } );

      >perl vert5.pl
      ok 1
                Rate ANDmask   pdl
      ANDmask 7.86/s      --  -55%
      pdl     17.3/s    120%    --
      1..1
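For comparison, what the pdl slice extracts — the byte at column $offset of each fixed-length line in a chunk — can be re-expressed with plain substr. A sketch on toy data (the 5-byte lines and column are assumptions for illustration, not the benchmark's "dna.txt"):

```perl
use strict;
use warnings;

# Toy chunk: three fixed-length lines of 4 bases plus a newline.
my $lineLen = 5;
my $offset  = 1;                       # zero-based column to extract
my $chunk   = "ACGT\nTGCA\nGATC\n";

# Step through the chunk one line at a time, taking the single byte at
# the target column -- the same walk the slice "$offset,0:$lastLine"
# performs over the whole chunk in one go.
my $retStr = '';
for ( my $pos = $offset; $pos < length $chunk; $pos += $lineLen )
{
    $retStr .= substr $chunk, $pos, 1;
}
print "$retStr\n";    # CGA
```

PDL performs the same strided walk in compiled C over the whole chunk at once, which is presumably where its speed advantage comes from.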

        Interesting! I don't have PDL installed on my system but, reading about it, it seems that it contains highly optimised routines for array handling largely written in C. This would probably account for the increase in speed. I'm not a geneticist so I don't know if the OP's problem is a real-world scenario but I wonder if PDL has been used widely in the field. Thank you for pointing it out.

        Cheers,

        JohnGG