in reply to Challenge: Construct an unpack string

Assuming this is being extracted from a single text string, and that the positions overlap as shown,

Position: 0123456789012345678901234
name1:    ..........
name2:            ...
name3:              ..............
then, given the string '0123456789abcdefghijklmnopqrstuvwxyz', the following would seem to extract the data as required:
my ($name1, $name3, $name2) = unpack(q{a10 a14 X16 a3}, $str);

Test:

perl -le '$str = q{0123456789abcdefghijklmnopqrstuvwxyz}; my ($name1, $name3, $name2) = unpack(q{a10 a14 X16 a3}, $str); print $name1; print $name2; print $name3;'

Results:

0123456789
89a
abcdefghijklmn

Does that conform to what you expected?
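As a sanity check (my own illustration, not part of the original post), the template can be run directly, with each template code annotated against the positions in the diagram above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $str = '0123456789abcdefghijklmnopqrstuvwxyz';

# a10 -> take 10 bytes          (name1, positions 0..9)
# a14 -> take 14 bytes          (name3, positions 10..23)
# X16 -> back up 16 bytes       (to position 24 - 16 = 8)
# a3  -> take 3 bytes           (name2, positions 8..10)
my ($name1, $name3, $name2) = unpack 'a10 a14 X16 a3', $str;

print "$name1\n";   # 0123456789
print "$name2\n";   # 89a
print "$name3\n";   # abcdefghijklmn
```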

Replies are listed 'Best First'.
Re^2: Challenge: Construct an unpack string
by holli (Abbot) on Sep 27, 2006 at 11:56 UTC
    No. It should be
    0123456789 89a bcdefghijklmn
    My solution looks like the following (the data structure is slightly modified). There is also a benchmark against BrowserUk's solution:
    use warnings;
    use strict;
    use Benchmark;

    my $text     = "0123456789abcdefghijklmn";
    my $unpack   = "";
    my $position = 0;

    my @fields = (
        { name => 'name1', start => 0,  len => 10 },
        { name => 'name2', start => 8,  len => 3  },
        { name => 'name3', start => 11, len => 14 },
    );

    for ( @fields ) {
        $unpack .= $_->{start} < $position ? 'X' . ( $position - $_->{start} )
                 : $_->{start} > $position ? 'x' . ( $_->{start} - $position )
                 : '';
        $unpack .= 'A' . $_->{len};
        $position = $_->{start} + $_->{len};
    }

    print "$unpack\n";
    print join( "*", unpack( $unpack, $text ) ), "\n";

    timethese( 1_000_000, {
        'unpack' => sub {
            my @a = unpack( $unpack, $text );
        },
        'substr' => sub {
            my @a = map { substr $text, $_->{start}, $_->{len} } @fields;
        },
    } );
    Results:
    A10X2A3A14
    0123456789*89a*bcdefghijklmn
    Benchmark: timing 1000000 iterations of substr, unpack...
        substr: 12 wallclock secs (12.80 usr + -0.02 sys = 12.78 CPU) @ 78241.14/s (n=1000000)
        unpack:  9 wallclock secs ( 9.00 usr +  0.00 sys =  9.00 CPU) @ 111098.77/s (n=1000000)
    So it looks like the unpack version is faster, as my gut said. The cost of assembling the unpack string can be considered irrelevant, because in the real world it would happen only once per file.
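    As a side note (my own sketch, not part of the original post), the generated template `A10X2A3A14` can be read field by field: `x` skips forward and `X` backs up, which is what lets a single template handle overlapping fields:

    ```perl
    use strict;
    use warnings;

    # A10 -> name1: 10 bytes from position 0   (cursor moves to 10)
    # X2  -> back up 2 bytes to position 8
    # A3  -> name2: 3 bytes from position 8    (cursor moves to 11)
    # A14 -> name3: up to 14 bytes from position 11
    my $text = '0123456789abcdefghijklmn';
    my @got  = unpack 'A10 X2 A3 A14', $text;
    print join( '*', @got ), "\n";   # 0123456789*89a*bcdefghijklmn
    ```
    
    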


    holli, /regexed monk/

      Here's another benchmark.

      use warnings;
      use strict;
      use Benchmark qw[ cmpthese ];

      my @fields = (
          { name => 'name1', start => 0,  len => 4  },
          { name => 'name2', start => 3,  len => 7  },
          { name => 'name3', start => 8,  len => 3  },
          { name => 'name4', start => 0,  len => 10 },
          { name => 'name5', start => 5,  len => 20 },
          { name => 'name6', start => 11, len => 14 },
          { name => 'name7', start => 9,  len => 13 },
          { name => 'name8', start => 2,  len => 2  },
          { name => 'name9', start => 1,  len => 10 },
      );

      open FH, '<', $ARGV[ 0 ] or die $!;

      cmpthese -3, {
          'unpack' => sub {
              my $unpack   = "";
              my $position = 0;
              for ( @fields ) {
                  $unpack .= $_->{start} < $position ? 'X' . ( $position - $_->{start} )
                           : $_->{start} > $position ? 'x' . ( $_->{start} - $position )
                           : '';
                  $unpack .= 'A' . $_->{len};
                  $position = $_->{start} + $_->{len};
              }
              seek FH, 0, 0;
              while( my $text = <FH> ) {
                  my @a = unpack $unpack, $text;
              }
          },
          'substr' => sub {
              seek FH, 0, 0;
              while( my $text = <FH> ) {
                  my @a = map { substr $text, $_->{start}, $_->{len} } @fields;
              }
          }
      };

      close FH;
      __END__
      C:\test>for /l %i in (1,1,6) do holli data\alpha.1e%i

      C:\test>holli data\alpha.1e1
                 Rate unpack substr
      unpack  28603/s     --   -80%
      substr 140302/s   391%     --

      C:\test>holli data\alpha.1e2
               Rate substr unpack
      substr  400/s     --   -26%
      unpack  540/s    35%     --

      C:\test>holli data\alpha.1e3
               Rate substr unpack
      substr 40.3/s     --   -27%
      unpack 54.9/s    36%     --

      C:\test>holli data\alpha.1e4
               Rate substr unpack
      substr 4.06/s     --   -28%
      unpack 5.61/s    38%     --

      C:\test>holli data\alpha.1e5
      (warning: too few iterations for a reliable count)
      (warning: too few iterations for a reliable count)
             s/iter substr unpack
      substr   2.47     --   -27%
      unpack   1.80    37%     --

      C:\test>holli data\alpha.1e6
      (warning: too few iterations for a reliable count)
      (warning: too few iterations for a reliable count)
             s/iter substr unpack
      substr   25.3     --   -29%
      unpack   18.0    40%     --

      If your files are bigger than a few lines, the unpack version starts winning quite quickly.

      But by nowhere near as much as the other benchmarks would have you believe, because once you factor in reading each line from the file (IO time that is constant for both approaches), the parsing time becomes much less significant. The gain is worth having, but much smaller than you might have thought.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.