in reply to Re^2: Simple line parse question
in thread Simple line parse question

You can generate the input yourself. For example:

xxx@xxx:~/test/perl$ seq 100 1000000 | perl -ne 'print int(rand($_)), "\n"' | xargs -n10 echo > a
xxx@xxx:~/test/perl$ wc -l a
99991 a
xxx@xxx:~/test/perl$ for i in {1..100}; do cat a; done > b
xxx@xxx:~/test/perl$ wc -l b
9999100 b
xxx@xxx:~/test/perl$ cat b | time -p awk '{print $3$4}' > /dev/null
real 8.78
user 7.89
sys 0.38
xxx@xxx:~/test/perl$ cat b | time -p perl -ne 'print join("", (split(" ", $_, 5))[2,3]),"\n";' > /dev/null
real 13.78
user 12.93
sys 0.32

Re^4: Simple line parse question
by Marshall (Canon) on Aug 09, 2010 at 02:24 UTC
    This file "b" is a pretty huge thing. It is so big that I can't run your script to completion without exceeding my disk quota on the Linux machine I have access to.

    However, a typical line has 10 integers on it. On the machine I tested on, Perl can process 300,000 lines like that in 0.8 seconds. You are not using the "power of Perl". Perl combines the ease of use of a scripting language with the execution efficiency of a compiled language.

    I love 'C' and I'm pretty good at assembly when I have to do it, BUT for just a few lines of code that can process more than 300K lines per second, I don't see the need for either.

    #!/usr/bin/perl -w
    use strict;
    use Benchmark;

    # file: jimmy.pl

    timethese( 5, { jimmy => q{ jimmy(); } } );

    sub jimmy {
        open( IN,  '<', "b" )         or die;
        open( OUT, '>', "/dev/null" ) or die;
        my $numlines = 0;
        while (<IN>) {
            next if /^\s+$/;         # skip blank lines
            my @words = split;
            next if ( @words < 4 );  # something strange here:
                                     # happens just a very, very few times, but
                                     # there is a flaw in the "b" file generation
            print OUT @words[ 2, 3 ], "\n";
            $numlines++;
        }
        print "num lines read = $numlines\n";
    }
    __END__
    [prompt]$ jimmy.pl
    Benchmark: timing 5 iterations of jimmy...
    num lines read = 299970
    num lines read = 299970
    num lines read = 299970
    num lines read = 299970
    num lines read = 299970
    jimmy: 6 wallclock secs ( 6.29 usr + 0.05 sys = 6.34 CPU) @ 0.79/s (n=5)
    Update: To do the simple math:
    0.8 sec / 300K lines = x / 1000K lines
    x ~ 2.7 sec
    That's approximately 375,000 lines per second.
    And that appears to me, to be very fast.
    At that rate, 12 seconds could process about 4.5 million lines (not bytes).
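    The cross-multiplication above can be checked mechanically (a one-liner sketch using the 0.8 sec / 300K-lines figure quoted above):

    ```shell
    # 0.8 sec / 300,000 lines = x sec / 1,000,000 lines
    perl -e '
        my $rate = 300_000 / 0.8;                  # lines per second
        my $x    = 1_000_000 / $rate;              # seconds for 1M lines
        printf "rate = %.0f lines/sec\n", $rate;   # 375000
        printf "x    = %.2f sec\n",       $x;      # 2.67
    '
    ```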

    Your benchmark is not realistic.
    xxx@xxx:~/test/perl$ wc -l b
    9999100 b
    "b" is a file with 9,999,100 LINES in it, or about 10 million.
    How often do you actually process a single file containing 10 million lines?
    I think that this is very rare!

    If you want to benchmark Perl vs some awk thing, get realistic!