in reply to Re^2: Working with fixed length files
in thread Working with fixed length files

  1. Ike's code assumes a one-to-one correspondence between the two record types.

    Well founded based on the OP's sample, but these types of mainframe 'carded' records often have multiple secondary records per primary record.

  2. If the OP confirmed that they were one-to-one, then you could also do a single read for both record types and pre-partition as well.
  3. The problem with unpack is that the template must be re-parsed for every record.

    And recent fairly extensive additions to the format specifications have taken some toll on performance.

    With these short, simply structured records that doesn't exact too much of a penalty, but with longer, more complex records it can.

  4. The idea of pre-partitioning the input buffer with an array of substr refs is that simply assigning each record into the pre-partitioned buffer effectively does the parsing and splitting.

    I think the technique is worth a mention for its own sake.
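For anyone who hasn't seen it, the technique in point 4 can be sketched in a few lines (the 3-field layout here is invented purely for illustration):

```perl
use strict;
use warnings;

# Invented layout: three fields of widths 2, 5 and 3 (10 bytes total).
my @widths = ( 2, 5, 3 );

# Pre-size the buffer and take one lvalue-substr ref per field.
my $rec = "\0" x 10;
my $off = 0;
my @fields = map { my $r = \substr( $rec, $off, $_ ); $off += $_; $r } @widths;

# Assigning a record into the buffer "parses" it for free: the refs
# still point at the same byte ranges of $rec.
substr( $rec, 0 ) = '03HELLOxyz';
print join( '/', map $$_, @fields ), "\n";    # 03/HELLO/xyz
```

Each subsequent record assigned into $rec is split with no further parsing work at all.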

A quick run of the two posted programs over the same file shows mine to be a tad quicker, but insignificantly. If I adjust mine to the same assumptions as Ike's, (or Ike's to the same assumptions as mine), then mine comes in ~20% quicker. Only a couple of seconds on 1e6 lines, but could be worth having for 100e6.

c:\test>901649-buk 901649.dat >nul
Took 9.283 for 1000000 lines

c:\test>901649-ike 901649.dat >nul
Took 11.305 for 1000000 lines

Code tested:

#! perl -slw
use strict;
use Time::HiRes qw[ time ];

my $start = time;

my $rec = chr(0) x 123;

my @type3l = split ':', '02:10:33:15:19:10:3:18:6:4';
my $n = 0;
my @type3o = map{ $n += $_; $n - $_; } @type3l;
my @type3  = map \substr( $rec, $type3o[ $_ ], $type3l[ $_ ] ), 0 .. $#type3o;

my @typeOl = split ':', '02:98:11:9';
$n = 0;
my @typeOo = map{ $n += $_; $n - $_; } @typeOl;
my @typeO  = map \substr( $rec, $typeOo[ $_ ], $typeOl[ $_ ] ), 0 .. $#typeOo;

$/ = \123;

until( eof() ) {
    substr( $rec, 0 ) = <>;
    if( $rec =~ /^03/ ) {
        print join '/', map $$_, @type3;
    }
    else {
        print join '|', map $$_, @typeO;
    }
}
printf STDERR "Took %.3f for $. lines\n", time() - $start;

#! perl -slw
use strict;
use Time::HiRes qw[ time ];

my $start = time;

$/ = \123;

while( <> ) {
    if( /^03/ ) {
        my @fields = unpack "A2 A10 A33 A15 A19 A10 A3 A18 A6 A4 x3", $_;
        print join '/', @fields;
    }
    else {
        my @fields = unpack "A2 A98 A11 A9 x3", $_;
        print join '|', @fields;
    }
}
printf STDERR "Took %.3f for $. lines\n", time() - $start;
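For an in-process comparison of just the two parsing techniques, stripped of all I/O, a minimal Benchmark sketch might look like this (the synthetic record and iteration count are my own; only the field widths come from the programs above):

```perl
use strict;
use warnings;
use Benchmark qw[ cmpthese ];

# One synthetic 123-byte type-3 record: '03', 118 data bytes, 3 pad bytes.
my @len  = ( 2, 10, 33, 15, 19, 10, 3, 18, 6, 4 );
my $data = '03' . ( 'X' x 118 ) . "\0\0\0";

# unpack version: the template is re-parsed on every call.
sub by_unpack {
    [ unpack 'A2 A10 A33 A15 A19 A10 A3 A18 A6 A4 x3', $data ];
}

# substr-ref version: the refs are built once, outside the timed code.
my $rec = "\0" x 123;
my $off = 0;
my @refs = map { my $r = \substr( $rec, $off, $_ ); $off += $_; $r } @len;
sub by_refs {
    substr( $rec, 0 ) = $data;    # one copy partitions the record
    [ map $$_, @refs ];
}

cmpthese( 50_000, { unpack => \&by_unpack, substr_refs => \&by_refs } );
```

This times only parsing, so the relative numbers will not match the whole-program runs above, which include read and print overhead.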

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^4: Working with fixed length files
by Tux (Canon) on Apr 28, 2011 at 08:47 UTC

    WOW. I'm surprised. Really. I do understand your code and the way it works, but that it outperforms unpack surprises me.

    Combining the two techniques makes me fantasize about bindcolumns for unpack. I'm convinced that the delay for unpack is not the parsing of the format, but the creation and copying of the scalars on the stack and into the target list.

    /me has more wishes for unpack, like unpacking from a stream that automatically moves forward for all bytes/characters read for the unpack.
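    There is no such API today, but a toy of the idea can be had in pure Perl with the '.' template code (5.10+), which reports unpack's current offset; everything here, names included, is my own sketch, not a real interface:

```perl
use strict;
use warnings;

# Toy "streaming unpack": a closure that remembers its offset in a
# buffer and advances by however many bytes each template consumed.
sub stream_unpacker {
    my ( $buf ) = @_;          # ref to the buffer, so nothing is copied
    my $pos = 0;
    return sub {
        my ( $tmpl ) = @_;
        # '@' seeks to the remembered offset; the trailing '.' reports
        # the offset after the fields, which becomes the new position.
        my @fields = unpack "\@$pos $tmpl .", $$buf;
        $pos = pop @fields;
        return @fields;
    };
}

my $data = '03HELLO12345';
my $next = stream_unpacker( \$data );
my ( $type ) = $next->( 'A2' );    # '03'
my ( $word ) = $next->( 'A5' );    # 'HELLO'
```

A real implementation would presumably refill the buffer from a handle as it drained; this only walks an in-memory string.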


    Enjoy, Have FUN! H.Merijn
Re^4: Working with fixed length files
by Tux (Canon) on Apr 28, 2011 at 09:06 UTC

    Unless one of you can prove my benchmark is wrong, I do see exactly what I expected:

    $ perl test.pl
           Rate  buk  ike
    buk  71.1/s   -- -36%
    ike   111/s  56%   --
    $

    The DATA section in the script has trailing \r's.



      You are benchmarking the code from the original nodes, which as I mentioned, operate on different assumptions.

      Ike's assumption means the while loop only iterates half as many times as it does for mine. The differences you are measuring are down to that.

      If you modify Ike's to read one record at a time and operate upon it conditionally (per my benchmark), or modify mine to read and map the pairs of records into a single pre-partitioned buffer, thereby removing the need for the if statement in the loop, then you would be comparing like with like.
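      The pairs variant might look something like this sketch; it assumes strictly alternating 123-byte records with the type-3 record first in each pair (that ordering is my assumption, not confirmed by the OP):

```perl
use strict;
use warnings;

# Pre-partition a buffer sized for one record PAIR (2 x 123 bytes),
# so a single assignment parses both records and no if() is needed.
my @w3 = ( 2, 10, 33, 15, 19, 10, 3, 18, 6, 4 );   # type-3 widths (+3 pad)
my @wO = ( 2, 98, 11, 9 );                          # other-type widths (+3 pad)

my $pair = "\0" x 246;
my ( @f3, @fO );
my $off = 0;
for ( @w3 ) { push @f3, \substr( $pair, $off, $_ ); $off += $_ }
$off = 123;                                         # second record's origin
for ( @wO ) { push @fO, \substr( $pair, $off, $_ ); $off += $_ }

if ( @ARGV ) {
    local $/ = \246;                 # read a whole pair per iteration
    until ( eof() ) {
        substr( $pair, 0 ) = <>;     # one copy partitions both records
        print join( '/', map $$_, @f3 ), "\n",
              join( '|', map $$_, @fO ), "\n";
    }
}
```

With the pair read, both conditional branches and half the loop iterations disappear.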

      I also tweaked my benchmark code to a) use a fixed-size read, thereby avoiding the newline search; and b) change the condition of the loop so that I could assign the return from readline directly to the mapped buffer, avoiding another copy.

      This was to ensure that the differences being tested were down to unpack versus substr refs, not the ancillary details of code originally written to demonstrate the technique rather than for performance.

      For more performance, do away with the substr and read directly into the partitioned scalar:

      #! perl -slw
      use strict;
      use Time::HiRes qw[ time ];

      my $start = time;
      my $rec = chr(0) x 123;

      my @type3l = split ':', '02:10:33:15:19:10:3:18:6:4';
      my $n = 0;
      my @type3o = map{ $n += $_; $n - $_; } @type3l;
      my @type3  = map \substr( $rec, $type3o[ $_ ], $type3l[ $_ ] ), 0 .. $#type3o;

      my @typeOl = split ':', '02:98:11:9';
      $n = 0;
      my @typeOo = map{ $n += $_; $n - $_; } @typeOl;
      my @typeO  = map \substr( $rec, $typeOo[ $_ ], $typeOl[ $_ ] ), 0 .. $#typeOo;

      until( eof() ) {
          read( ARGV, $rec, 123, 0 );
          if( $rec =~ /^03/ ) {
              print join '/', map $$_, @type3;
          }
          else {
              print join '|', map $$_, @typeO;
          }
      }
      printf STDERR "Took %.3f for $. lines\n", time() - $start;

      And for ultimate performance, switch to binmode & sysread to avoid the Windows crlf layer overhead. But that requires other tweaks as well, and I'm 21 hours into this day already.
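      A sysread version of the read loop might be sketched like this (my own arrangement, not the actual tweak; the callback shape is just for illustration):

```perl
use strict;
use warnings;

# Read fixed 123-byte records via sysread on a binmode handle and hand
# each to a callback. binmode means the crlf layer never runs, so any
# trailing \r\n stays inside the 123 bytes as part of the pad.
sub for_each_record {
    my ( $file, $cb ) = @_;
    open my $fh, '<', $file or die "$file: $!";
    binmode $fh;
    my $rec;
    while ( my $got = sysread( $fh, $rec, 123 ) ) {
        die "short record ($got bytes)" unless $got == 123;
        $cb->( $rec );
    }
    close $fh;
}
```

A production version would loop to refill on short reads; plain disk files normally return the full count per call.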

      But whatever, you do need to be comparing like with like.



        I think all you mention has already been said in the other answers. If I switch to

        sysread

        and compare buk with ike1 (reading one line at a time) and ike2 (reading two lines at a time), your method wins a bit, but leaves all three methods within the noise level. Also note how *all* numbers go up!

        With these rates, I wonder whether I would still use your substr ref method or the unpack approach, as I find the latter way easier to read and maintain. If, however, performance is vital, maybe XS code would squeeze out even more (with pre-bound variables).

                 Rate  ike1   buk  ike2
        ike1 384977/s    --   -2%   -3%
        buk  392677/s    2%    --   -1%
        ike2 394907/s    3%    1%    --

        Note that if you keep local $/ = \122; in ike1, it has a *huge* influence on performance, even if you'd say that it should not be used:

                 Rate  ike1   buk  ike2
        ike1 275687/s    --  -27%  -30%
        buk  378299/s   37%    --   -4%
        ike2 392677/s   42%    4%    --

      perl version?

      With your program I get "'x' outside of string in unpack" because of the x2; after removing those, I get:

             Rate  buk  ike
      buk  19.6/s   -- -43%
      ike  34.1/s  74%   --
      $ perl -e "die $^V"
      v5.12.2

      On 5.008009 I get

             Rate  buk  ike
      buk  22.7/s   -- -35%
      ike  35.1/s  54%   --

             Rate  buk  ike
      buk  24.9/s   -- -47%
      ike  47.1/s  89%   --
      $ ..\perl.exe -e " die $^V"
      v5.14.0

      These are typical Win32 MinGW/ActiveState builds.

      Update: Well, you didn't copy buk's code exactly; you omitted

      local $/ = \(2 * 122);

      which appears to be critical:

      5.008009
            Rate  ike  buk
      ike 35.5/s   -- -57%
      buk 83.1/s 134%   --

      v5.12.2
            Rate  ike  buk
      ike 33.6/s   -- -55%
      buk 74.4/s 121%   --

      v5.14.0
            Rate  ike  buk
      ike 46.3/s   -- -48%
      buk 88.2/s  91%   --

        I re-read BrowserUk's post, and I still don't see that line. And yes, I copied it exactly.

        The x2 error you see is because you didn't add the \r's to the DATA section as I wrote in the introduction line. They get lost when posting code on PM.

        Adding that line to his code is unfair, as that will skip half of the data. Fair would be to use \122, but that doesn't change much:

        === base/perl5.8.9 5.008009 i686-linux-64int
              Rate  buk  ike
        buk 66.7/s   -- -39%
        ike  109/s  63%   --
        === base/tperl5.8.9 5.008009 i686-linux-thread-multi-64int-ld
              Rate  buk  ike
        buk 61.1/s   -- -37%
        ike 96.7/s  58%   --
        === base/perl5.10.1 5.010001 i686-linux-64int
              Rate  buk  ike
        buk 63.3/s   -- -39%
        ike  104/s  65%   --
        === base/tperl5.10.1 5.010001 i686-linux-thread-multi-64int-ld
              Rate  buk  ike
        buk 56.1/s   -- -37%
        ike 88.8/s  58%   --
        === base/perl5.12.2 5.012002 i686-linux-64int
              Rate  buk  ike
        buk 62.5/s   -- -41%
        ike  105/s  69%   --
        === base/tperl5.12.2 5.012002 i686-linux-thread-multi-64int-ld
              Rate  buk  ike
        buk 54.5/s   -- -38%
        ike 88.4/s  62%   --
        === base/perl5.14.0 5.014000 i686-linux-64int
              Rate  buk  ike
        buk 60.6/s   -- -48%
        ike  116/s  92%   --
        === base/tperl5.14.0 5.014000 i686-linux-thread-multi-64int-ld
              Rate  buk  ike
        buk 53.8/s   -- -49%
        ike  105/s  96%   --

Re^4: Working with fixed length files
by Tux (Canon) on Apr 28, 2011 at 09:27 UTC

    Your third point made me curious. Running the below benchmark doesn't show a serious slowdown for the unpack code:

    Running perl-all test.pl
    === base/perl5.8.9 5.008009 i686-linux-64int
          Rate  buk  ike
    buk 65.4/s   -- -41%
    ike  110/s  68%   --
    === base/tperl5.8.9 5.008009 i686-linux-thread-multi-64int-ld
          Rate  buk  ike
    buk 60.8/s   -- -37%
    ike 95.9/s  58%   --
    === base/perl5.10.1 5.010001 i686-linux-64int
          Rate  buk  ike
    buk 61.9/s   -- -39%
    ike  102/s  65%   --
    === base/tperl5.10.1 5.010001 i686-linux-thread-multi-64int-ld
          Rate  buk  ike
    buk 55.4/s   -- -37%
    ike 88.4/s  60%   --
    === base/perl5.12.2 5.012002 i686-linux-64int
          Rate  buk  ike
    buk 63.0/s   -- -41%
    ike  107/s  70%   --
    === base/tperl5.12.2 5.012002 i686-linux-thread-multi-64int-ld
          Rate  buk  ike
    buk 54.3/s   -- -39%
    ike 88.4/s  63%   --
    === base/perl5.14.0 5.014000 i686-linux-64int
          Rate  buk  ike
    buk 59.9/s   -- -49%
    ike  117/s  96%   --
    === base/tperl5.14.0 5.014000 i686-linux-thread-multi-64int-ld
          Rate  buk  ike
    buk 52.8/s   -- -49%
    ike  104/s  97%   --
