in reply to Reading nested records in binary data

It really depends upon what you mean by 'efficiency'.

If notational convenience is your goal, then unpack wins hands down; but if execution efficiency is what you are after, you might want to give substr another look. Some results from a benchmark:

C:\test>535539
Test data: 1000 records
          Rate unpack substr  lrefs
unpack  1.77/s     --   -33%   -99%
substr  2.65/s    50%     --   -99%
lrefs    196/s 10974%  7289%     --

As you can see, the time taken to re-parse the template each time around the loop means that substr is 50% quicker than unpack.
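To make the comparison concrete, here is a minimal sketch of the two per-record parses (the parsing lines mirror those in the benchmark code at the end of this node; the sample buffer is just illustrative):

use strict;
use warnings;

## Build one illustrative 4800-byte record:
## a 300-byte header followed by 250 x 18-byte subrecords.
my $buffer = sprintf( '%-300s', 'HEADER' )
           . join '', map { sprintf '%018d', $_ } 1 .. 250;

## unpack: the 'a300(a18)250' template string is re-parsed on every call.
my( $uHeader, @uSubs ) = unpack 'a300(a18)250', $buffer;

## substr: the buffer is re-divided into 251 fresh strings for every record.
my $sHeader = substr $buffer, 0, 300;
my @sSubs   = map { substr $buffer, 300 + $_ * 18, 18 } 0 .. 249;

print "unpack: $uSubs[0]  substr: $sSubs[0]\n"; ## both yield the first subrecord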

Even with substr, you still have to re-divide the buffer and copy the substrings out to an array for every record. The method labelled 'lrefs' avoids doing this, with the result that it runs two orders of magnitude faster.

It does this by pre-allocating a buffer of the record size and then building an array of references into that buffer. It then reads each new record into the pre-subdivided buffer and uses the references to extract the subrecords:

my $buffer = chr(0) x 4800;

## Pre-partition the data using lvalue refs
my $header = \substr $buffer, 0, 300;
my @subs = map{ \substr( $buffer, 300 + $_ *18, 18 ) } 0 .. 249;

while( read( $fhTest, $buffer, 4800, 0 ) == 4800 ) {
    ...
}

Hoisting this 'invariant code' out of the loop, combined with avoiding the need to deallocate and reallocate the array elements for the subrecords each time around, is what makes this so efficient.
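Here is a minimal, stand-alone sketch of the technique. It reads from an in-memory file so it runs as-is, and assumes the same layout as above: a 300-byte header plus 250 x 18-byte subrecords per 4800-byte record.

#! perl -slw
use strict;

## Build one in-memory record so the sketch is self-contained.
my $testData = sprintf( '%-300s', 'HEADER' )
             . join '', map { sprintf '%018d', $_ } 1 .. 250;
open my $fhTest, '<', \$testData or die $!;

## Pre-allocate the record buffer and pre-partition it with lvalue refs.
my $buffer = chr(0) x 4800;
my $header = \substr $buffer, 0, 300;
my @subs   = map { \substr( $buffer, 300 + $_ * 18, 18 ) } 0 .. 249;

## Each read() refills the same buffer in place (note the 0 offset),
## so the references always see the contents of the latest record.
while( read( $fhTest, $buffer, 4800, 0 ) == 4800 ) {
    printf "H:%-20.20s first: %s last: %s\n",
        $$header, ${ $subs[0] }, ${ $subs[249] };
}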

Of course, if you need to keep the data for later use outside the loop (by building a hash, say), then you'll need to allocate the space to store it, and some of the efficiency gains will be lost:

C:\test>535539
Test data: 1000 records
         Rate unpack substr lrefs
unpack 1.48/s     --   -17%  -40%
substr 1.78/s    20%     --  -28%
lrefs  2.46/s    66%    38%    --

Even so, a 38% gain over substr and 66% over unpack is worth having if you are processing a large amount of data, despite the (slight) decrease in notational convenience.
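The storing variant is just the commented-out assignment in the benchmark below; continuing from the sketch above, inside the loop you copy the referenced data out into ordinary scalars before the next read() overwrites the shared buffer:

my %hash;
while( read( $fhTest, $buffer, 4800, 0 ) == 4800 ) {
    ## Dereference and copy now; the buffer is reused by the next read().
    $hash{ $$header } = [ map { $$_ } @subs ];
}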

The benchmark code:

#! perl -slw
use strict;
use Benchmark qw[ cmpthese ];

our $N ||= 1000;

## Gen some test data into ramfile to exclude io buffering from the benchmark
our $testData;
open our $fhTest, '>', \$testData;
printf $fhTest '%05d%295s%4500s',
    $_, 'h' x 295, join( '', '000000000000000001' .. '000000000000000250' )
    for 1 .. $N;
close $fhTest;

printf "Test data: %.f records\n", length( $testData ) /4800;

open $fhTest, '<', \$testData;

cmpthese -3, {
    lrefs => q[
        my $buffer = chr(0) x 4800;

        ## Pre-partition the data using lvalue refs
        my $header = \substr $buffer, 0, 300;
        my @subs = map{ \substr( $buffer, 300 + $_ *18, 18 ) } 0 .. 249;

        seek $fhTest, 0, 0;
        while( read( $fhTest, $buffer, 4800, 0 ) == 4800 ) {
#            printf "H:%-20.20s first: %s last: %s\n",
#                $$header, ${ $subs[0] }, ${ $subs[249] };
#            $hash{ $$header } = [ map{ $$_ } @subs ];
        }
    ],
    substr => q[
        my $buffer;
        my %hash;
        seek $fhTest, 0, 0;
        while( read( $fhTest, $buffer, 4800, 0 ) == 4800 ) {
            my $header = substr $buffer, 0, 300;
            my @subs;
            $subs[ $_ ] = substr $buffer, 300 + $_ *18, 18 for 0 .. 249;
#            printf "H:%-20.20s first: %s last: %s\n",
#                $header, @subs[ 0, 249 ];
#            $hash{ $header } = \@subs;
        }
    ],
    unpack => q[
        my $buffer;
        my %hash;
        seek $fhTest, 0, 0;
        while( read( $fhTest, $buffer, 4800, 0 ) == 4800 ) {
            my( $header, @subs ) = unpack 'a300(a18)250', $buffer;
#            printf "H:%-20.20s first: %s last: %s\n",
#                $header, @subs[ 0, 249 ];
#            $hash{ $header } = \@subs;
        }
    ],
};

Uncomment the hash assignments to see the reduced performance if you need the data outside the loop, and the print statements to convince yourself that they all do the same thing.

