in reply to Re^4: Working with fixed length files
in thread Working with fixed length files

You are benchmarking the code from the original nodes, which as I mentioned, operate on different assumptions.

Ike's assumption means the while loop only iterates half as many times as it does for mine. The differences you are measuring come down to that.

If you modify Ike's to read one record at a time and operate upon it conditionally (per my benchmark), or modify mine to read and map the pairs of records into a single pre-partitioned buffer, thereby removing the need for the if statement in the loop, then you would be comparing like with like.
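As a sketch of what "like with like" means here, this is an unpack version that, like the substr-ref version, reads one fixed-size record per iteration and dispatches on the record type. The templates are built from the field widths in the code further down; the two sample records and the in-memory handle are made-up test data, not the original benchmark input.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Templates derived from the field-width lists in the substr-ref code:
# 02:10:33:15:19:10:3:18:6:4 and 02:98:11:9 (120 bytes each).
my $t3 = 'A2 A10 A33 A15 A19 A10 A3 A18 A6 A4';   # type-03 template
my $tO = 'A2 A98 A11 A9';                         # other-type template

# Two made-up 120-byte records in an in-memory "file".
my $data = sprintf( '%-120s', '03' . 'x' x 10 )
         . sprintf( '%-120s', '01' . 'y' x 10 );
open my $fh, '<', \$data or die $!;

my @out;
until ( eof $fh ) {
    read( $fh, my $rec, 120 ) or last;            # one record per iteration
    push @out, $rec =~ /^03/
        ? join '/', unpack $t3, $rec              # conditional dispatch,
        : join '|', unpack $tO, $rec;             # as in the substr version
}
print "$_\n" for @out;
```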

I also tweaked my benchmark code to a) use a fixed-size read, thereby avoiding the newline search; and b) change the condition of the loop so that I could assign the return from readline directly to the mapped buffer, avoiding another copy.

This was to ensure that the differences being tested were down to unpack versus substr refs, not the ancillary details of code written to demonstrate the technique rather than for performance.

For more performance, do away with the substr and read directly into the partitioned scalar:

#! perl -slw
use strict;
use Time::HiRes qw[ time ];

my $start = time;
my $rec = chr(0) x 123;

my @type3l = split ':', '02:10:33:15:19:10:3:18:6:4';
my $n = 0;
my @type3o = map{ $n += $_; $n - $_; } @type3l;
my @type3 = map \substr( $rec, $type3o[ $_ ], $type3l[ $_ ] ), 0 .. $#type3o;

my @typeOl = split ':', '02:98:11:9';
$n = 0;
my @typeOo = map{ $n += $_; $n - $_; } @typeOl;
my @typeO = map \substr( $rec, $typeOo[ $_ ], $typeOl[ $_ ] ), 0 .. $#typeOo;

until( eof() ) {
    read( ARGV, $rec, 123, 0 );
    if( $rec =~ /^03/ ) {
        print join '/', map $$_, @type3;
    }
    else {
        print join '|', map $$_, @typeO;
    }
}

printf STDERR "Took %.3f for $. lines\n", time() - $start;

And for ultimate performance, switch to binmode & sysread to avoid the Windows CRLF layer overhead. But that requires other tweaks too, and I'm 21 hours into this day already.

But whatever, you do need to be comparing like with like.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^6: Working with fixed length files
by Tux (Canon) on Apr 28, 2011 at 11:30 UTC

    I think all you mention has already been said in the other answers. If I switch to sysread and compare buk with ike1 (reading one line at a time) and ike2 (reading two lines at a time), your method wins a bit but leaves all three methods within the noise level. Also note how *all* numbers go up!

    With these rates, I wonder if I would still use your substr ref method or the unpack approach, as I find the latter way easier to read and maintain. If however performance is vital, maybe XS code would squeeze out even more (with pre-bound variables).

              Rate  ike1   buk  ike2
    ike1  384977/s    --   -2%   -3%
    buk   392677/s    2%    --   -1%
    ike2  394907/s    3%    1%    --

    Note that if you keep local $/ = \122; in ike1, it has a *huge* influence on performance, even if you'd say that it should not be used:

              Rate  ike1   buk  ike2
    ike1  275687/s    --  -27%  -30%
    buk   378299/s   37%    --   -4%
    ike2  392677/s   42%    4%    --

    Enjoy, Have FUN! H.Merijn

      The reason all the code runs the same speed is because sysread doesn't work on ramfiles, so the loops are never being entered.

      That also explains the dramatic slowdown effect of local $/ = \nnn;. It adds an operation to a call that does almost nothing, and twice almost nothing takes longer than once almost nothing.

      Which brings up another mystery entitled: "The Strange Case of the Disappearing AutoDie".



        autodie was removed so I could bench on older perls.

        Good analysis! That should teach me :/

        sysread basically means "bypass PerlIO, do a read()", so PerlIO::scalar doesn't get a say in it. However, whether that is how things should be is a different matter.



        With all sysreads replaced by reads, my bench shows much more reliable figures:

                Rate   buk  ike1  ike2
        buk   81.1/s    --  -43%  -46%
        ike1   143/s   76%    --   -5%
        ike2   150/s   85%    5%    --

        I think it would be hard to reduce the overhead even more.


      Note that if you keep local $/ = \122; in ike1, it has a *huge* influence on performance,

      That may be because, as you haven't used binmode, IO layers are still in force and are checking for the default input record separator (newlines) even though it is not being used. Setting $/ = \nnn stops the input buffer being scanned as it is loaded. (Or something like that. :)

      I'd expect to see similar changes with $/ = \nnn in the other routines too.
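      For anyone following along, a quick standalone demonstration (not the benchmark code) of what $/ = \nnn does: readline stops scanning for newlines and instead returns fixed-size chunks. The 10-byte in-memory "file" here is made up for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A made-up 10-byte in-memory "file": five 'A's then five 'B's.
open my $fh, '<', \( 'A' x 5 . 'B' x 5 ) or die $!;

local $/ = \5;        # record separator = "give me 5 bytes at a time"
my @recs = <$fh>;     # readline returns fixed 5-byte records,
                      # with no newline search at all
```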

      I really like your idea of binding unpack templates to an array of aliases to partitions of a buffer. Effectively 'compiling' the template, much as /o used to compile regexes.
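      A minimal sketch of that 'compiled template' idea, stripped of the benchmark machinery: take references to substr() slices of a buffer once, up front; each deref then re-reads the current buffer contents, so refilling the buffer updates every field. The offsets, widths, and record contents here are made up.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $buf = ' ' x 10;

# "Compile" the template once: refs to lvalue substr slices of $buf.
my @field = ( \substr( $buf, 0, 2 ), \substr( $buf, 2, 8 ) );

# Refill the buffer in place (same length, so the slices stay valid) ...
substr( $buf, 0, 10 ) = '03ABCDEFGH';

# ... and every field ref now sees the new contents.
my @vals = map { $$_ } @field;   # ('03', 'ABCDEFGH')
```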



        In a dedicated benchmark to test this, binmode doesn't change anything at all:

        with a binmode call:
                     Rate  with_rs  without
        with_rs  254857/s       --     -34%
        without  384458/s      51%       --

        without a binmode call:
                     Rate  with_rs  without
        with_rs  260564/s       --     -31%
        without  375127/s      44%       --

        I've just posted the question to the perl5 porters.



      There is something wrong with this benchmark. I don't know what it is yet, but there is definitely something wrong.

      Your numbers show, and I get the same results here, that Ike1 & Ike2 run in almost identical time.

      This, despite the fact that Ike1 loops twice as many times, makes twice as many calls per loop to unpack, and makes twice as many calls per loop to sysread. So 4 times as many calls to each overall, i.e. 8 times as many calls in total!

      That flies in the face of everything we know about performant Perl code. Sorry, but that simply cannot be true.

