in reply to Working with fixed length files

Here is a different strategy for tackling the problem that can have some serious performance advantages:

#! perl -slw
use strict;

my $rec = chr(0) x 123;

## Field lengths and start offsets for the type-03 records
my @type3l = split ':', '02:10:33:15:19:10:3:18:6:4';
my $n = 0;
my @type3o = map{ $n += $_; $n - $_; } @type3l;

## An array of lvalue-substr refs that pre-partitions the record buffer
my @type3 = map \substr( $rec, $type3o[ $_ ], $type3l[ $_ ] ), 0 .. $#type3o;

## Ditto for the other (type-02) records
my @typeOl = split ':', '02:98:11:9';
$n = 0;
my @typeOo = map{ $n += $_; $n - $_; } @typeOl;
my @typeO = map \substr( $rec, $typeOo[ $_ ], $typeOl[ $_ ] ), 0 .. $#typeOo;

while( <DATA> ) {
    ## Assigning each line into the pre-partitioned buffer does the "parsing"
    substr( $rec, 0 ) = $_;

    if( /^03/ ) {
        print join '/', map $$_, @type3;
    }
    else {
        print join '|', map $$_, @typeO;
    }
}

__DATA__
03002068454210482 000000004204.572011-04-14 19:53:41INTERNET C 750467375 ^M
0214833 G02042954 ^M
03002068703214833 000000002558.662011-04-15 08:17:19INTERNET C 761212737 ^M
0211561 05601207284 ^M
03002068802911561 000000001463.702011-04-15 08:40:52INTERNET C 719807216 ^M
029911 00100275296 ^M

Produces:

c:\test>junk92
03/0020684542/10482 /000000004204.57/2011-04-14 19:53:41/INTERNET /C /750467375 / /
02|14833 |G02042954 |
03/0020687032/14833 /000000002558.66/2011-04-15 08:17:19/INTERNET /C /761212737 / /
02|11561 |05601207284|
03/0020688029/11561 /000000001463.70/2011-04-15 08:40:52/INTERNET /C /719807216 / /
02|9911 |00100275296|

[23:23:38.05] c:\test>

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^2: Working with fixed length files
by Tux (Canon) on Apr 28, 2011 at 06:17 UTC

    In theory ikegami's unpack approach should be many times faster than the substr approach, as unpack is one single OP. This reference approach should be somewhere in between. I'm curious how a Benchmark would compare the three on the original-sized files, and whether disk I/O actually minimizes the effect of the parsing-speed difference.
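
    A minimal Benchmark along those lines (the synthetic in-memory record is my assumption; the field widths are the type-03 widths from the posts above, so this measures parsing only, not disk I/O):

    use strict;
    use warnings;
    use Benchmark qw( cmpthese );

    # Field widths for the type-03 records, taken from the posts above
    my @w = ( 2, 10, 33, 15, 19, 10, 3, 18, 6, 4 );

    # One synthetic full-width record to parse repeatedly
    my $line = join '', map { sprintf "%-${_}s", 'x' } @w;

    # 1) unpack: template built once from the widths
    my $tmpl = join ' ', map "A$_", @w;

    # 2) plain substr: offsets computed once
    my $n = 0;
    my @off = map { my $o = $n; $n += $_; $o } @w;

    # 3) pre-partitioned buffer of lvalue-substr refs
    my $rec = ' ' x length $line;
    $n = 0;
    my @fld = map { my $r = \substr( $rec, $n, $_ ); $n += $_; $r } @w;

    cmpthese( -3, {
        unpack => sub { my @f = unpack $tmpl, $line },
        substr => sub { my @f = map substr( $line, $off[$_], $w[$_] ), 0 .. $#w },
        refs   => sub { substr( $rec, 0 ) = $line; my @f = map $$_, @fld },
    });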


    Enjoy, Have FUN! H.Merijn

      1. Ike's code assumes a one-to-one correspondence between the two record types.

        Well founded based on the OP's sample, but these types of mainframe 'carded' records often have multiple secondary records for each primary record.

      2. If the OP confirmed that they were one-to-one, then you could do a single read for both record types and pre-partition that combined buffer as well.
      3. The problem with unpack is that the template must be re-parsed for every record.

        And recent, fairly extensive additions to the pack/unpack template syntax have taken some toll on performance.

        With these short, simply structured records that doesn't exact too much of a penalty, but with longer, more complex records it can.

      4. The idea of pre-partitioning the input buffer with an array of substr refs is that simply assigning each record into the pre-partitioned buffer effectively does the parsing and splitting.

        I think the technique is worth a mention for its own sake; a stripped-down sketch follows below.
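
        Stripped to its essence (the three-field layout and the 'AB12345xyz' test record below are purely illustrative):

        use strict;
        use warnings;

        # A fixed buffer the size of one record, partitioned once up front
        my $rec  = ' ' x 10;
        my @lens = ( 2, 5, 3 );
        my $off  = 0;
        my @fld  = map { my $r = \substr( $rec, $off, $_ ); $off += $_; $r } @lens;

        # Assigning a new record into the buffer is all the "parsing" needed
        substr( $rec, 0 ) = 'AB12345xyz';
        print join '|', map $$_, @fld;   # AB|12345|xyz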

      A quick run of the two posted programs over the same file shows mine to be a tad quicker, but insignificantly. If I adjust mine to the same assumptions as Ike's (or Ike's to the same assumptions as mine), then mine comes in ~20% quicker. Only a couple of seconds on 1e6 lines, but that could be worth having for 100e6.

      c:\test>901649-buk 901649.dat >nul
      Took 9.283 for 1000000 lines

      c:\test>901649-ike 901649.dat >nul
      Took 11.305 for 1000000 lines

      Code tested:
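
      A minimal timing wrapper of the shape used for the runs above might look like the following (the file-name handling and the use of Time::HiRes are assumptions on my part, not necessarily the code actually benchmarked):

      #! perl -slw
      use strict;
      use Time::HiRes qw( time );

      my $start = time;
      my $lines = 0;

      open my $fh, '<', $ARGV[0] or die "open '$ARGV[0]': $!";
      while( <$fh> ) {
          ++$lines;
          # parse and print the record here, as in either posted program
      }
      close $fh;

      # Report on STDERR so the '>nul' redirection of the output doesn't hide it
      printf STDERR "Took %.3f for %d lines\n", time() - $start, $lines;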



        WOW. I'm surprised. Really. I do understand your code and the way it works, but that it outperforms unpack surprises me.

        Combining the two techniques makes me fantasize about a bind_columns (as DBI offers) for unpack. I'm convinced that the delay for unpack is not the parsing of the format, but the creation and copying of the scalars on the stack and into the target list.

        /me has more wishes for unpack, like unpacking from a stream that automatically advances by the number of bytes/characters consumed by the unpack.
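
        For reference, the DBI idiom being alluded to binds the result columns once and has every fetch refill the same scalars, which is roughly what the substr-ref buffer achieves for fixed-width records (the SQLite database and column names below are invented for the example):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'dbi:SQLite:dbname=demo.db', '', '', { RaiseError => 1 } );
        my $sth = $dbh->prepare( 'SELECT id, name, amount FROM records' );
        $sth->execute;

        # Bind once; each fetch refills the same scalars instead of returning copies
        $sth->bind_columns( \my ( $id, $name, $amount ) );
        while ( $sth->fetch ) {
            print "$id|$name|$amount\n";
        }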



        Unless one of you can prove my benchmark is wrong, I do see exactly what I expected:

        $ perl test.pl
               Rate  buk  ike
        buk  71.1/s   -- -36%
        ike   111/s  56%   --
        $

        The DATA section in the script has trailing \r's:



        Your third point made me curious. Running the below benchmark doesn't show a serious slowdown for the unpack code:

        Running perl-all test.pl
        === base/perl5.8.9    5.008009  i686-linux-64int
              Rate  buk  ike
        buk 65.4/s   -- -41%
        ike  110/s  68%   --
        === base/tperl5.8.9   5.008009  i686-linux-thread-multi-64int-ld
              Rate  buk  ike
        buk 60.8/s   -- -37%
        ike 95.9/s  58%   --
        === base/perl5.10.1   5.010001  i686-linux-64int
              Rate  buk  ike
        buk 61.9/s   -- -39%
        ike  102/s  65%   --
        === base/tperl5.10.1  5.010001  i686-linux-thread-multi-64int-ld
              Rate  buk  ike
        buk 55.4/s   -- -37%
        ike 88.4/s  60%   --
        === base/perl5.12.2   5.012002  i686-linux-64int
              Rate  buk  ike
        buk 63.0/s   -- -41%
        ike  107/s  70%   --
        === base/tperl5.12.2  5.012002  i686-linux-thread-multi-64int-ld
              Rate  buk  ike
        buk 54.3/s   -- -39%
        ike 88.4/s  63%   --
        === base/perl5.14.0   5.014000  i686-linux-64int
              Rate  buk  ike
        buk 59.9/s   -- -49%
        ike  117/s  96%   --
        === base/tperl5.14.0  5.014000  i686-linux-thread-multi-64int-ld
              Rate  buk  ike
        buk 52.8/s   -- -49%
        ike  104/s  97%   --

Re^2: Working with fixed length files
by vendion (Scribe) on Apr 29, 2011 at 13:31 UTC

    Your code looks really nice and I think I may be able to use it, or at least an approach similar to it. The only question it raises is how this would handle data from a file that uses one pattern throughout. It seems this line in my OP was overlooked: "in all I am working with four files and this is the only one that differs like this."

    One of the four is semicolon delimited, so for that one I am just doing a split and removing the extra whitespace. That leaves the file my sample output came from and two other files, each with its own pattern.

    A short breakdown of the files:

    1. File 1: semicolon delimited
    2. File 2: fixed 02:10:33:15:19:10:3:18:6:4 & 02:98:11:9
    3. File 3: fixed 2:35:14:14:14:19:25:11:16
    4. File 4: fixed 2:20:20:2:11:8:10:10:03:3:4
    If it helps, here is some sample output from file 3 and file 4.

    File 3:
    028088 00000005402.6000000000000.0000000000000.002011-04-19 12:00:00ALICIA MARIA LOPEZ BAZZOC00101893559
    0213262 00000000000.0000000000000.0000000000000.002011-04-19 12:00:00INDEGOLF S.A. 00101893559
    029052 00000002927.4000000000000.0000000000000.002011-04-19 12:00:00INDEGOLF (ALICIA LOPEZ) 02800898617
    027550 00000000000.0000000000000.0000000000000.002011-04-19 12:00:00ALICIA LOPEZ (INDEGOLF)02855262166
    029051 00000000000.0000000000000.0000000000000.002011-04-19 12:00:00ALICIA MARIA LOPEZ BAZZOC02800898617
    028085 00000000000.0000000000000.0000000000000.002010-10-20 12:00:00INDEGOLF, S. A. 00101893559
    File 4:
    02CAFETERIA, ,MARI 0000000000000009822507+0009403.2020110415003201874313748210172100005
    02RAMON, BRITO 0000000000000009817407+0108815.9220110415003201874413748210172100005
    02EAST COAST CHART 0000000000000009851407+0002838.6020110415003221149915931210382100005
    02INMOBILIARIA PAL 0000000000000009770507+0001345.1820110415002915670515250210202100005
    02IGLESIA ESPIRITU 0000000000000009755607+0001031.7420110415003201860213748210172100005
    This is why I have it read in the first file that way: it loads the correct template for the data being parsed. I regret not giving output from the other files when I originally posted; I wasn't even sure my post would make it through, since the area I live in was affected by the storms that went through the southeast U.S.
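
    Given that each of the fixed-width files uses a single layout throughout, one way to reuse the approach above is to pick the width list per file and build the field refs from it once; a sketch, where the file names and the dispatch-by-name idea are assumptions of mine and the width lists come from the breakdown above:

    use strict;
    use warnings;
    use List::Util qw( sum );

    # Width lists from the breakdown above, keyed by (hypothetical) file names
    my %layout = (
        'file3.dat' => '2:35:14:14:14:19:25:11:16',
        'file4.dat' => '2:20:20:2:11:8:10:10:03:3:4',
    );

    my $file = shift @ARGV || 'file3.dat';
    my @lens = split ':', $layout{ $file };

    # One layout per file, so the buffer is partitioned just once up front
    my $rec = ' ' x sum @lens;
    my $off = 0;
    my @fld = map { my $r = \substr( $rec, $off, $_ ); $off += $_; $r } @lens;

    open my $fh, '<', $file or die "open '$file': $!";
    while ( <$fh> ) {
        s/\r?\n\z//;                        # drop CRLF/LF before assigning
        substr( $rec, 0, length $_ ) = $_;  # every line uses the same layout
        print join '|', map $$_, @fld;
    }
    close $fh;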