This file "b" is a pretty huge thing.
It is so big that I can't run your script to completion without exceeding my disk quota on the Linux machine that I have access to.
However, a typical line has 10 integers on it. On the machine I tested on, Perl processes 300,000 lines like that in about 1.3 seconds (see the benchmark below). You are not using the "power of Perl". Perl combines the ease of use of a scripting language with execution speed that, for jobs like this, comes close to compiled code.
I love C, and I'm pretty good at assembly when I have to be, BUT for just a few lines of code that can chew through a couple hundred thousand lines per second, I don't see the need for either.
#!/usr/bin/perl -w
use strict;
use Benchmark;
# file: jimmy.pl
timethese( 5, { jimmy => sub { jimmy() } } );
sub jimmy
{
    open( my $in,  '<', 'b' )         or die "can't open b: $!";
    open( my $out, '>', '/dev/null' ) or die "can't open /dev/null: $!";
    my $numlines = 0;
    while (<$in>)
    {
        next if /^\s+$/;        # skip blank lines
        my @words = split;
        next if @words < 4;     # something strange here: happens only a very,
                                # very few times, but there is a flaw in the
                                # "b" file generation
        print $out @words[ 2, 3 ], "\n";
        $numlines++;
    }
    close $in;
    close $out;
    print "num lines read = $numlines\n";
}
__END__
[prompt]$ jimmy.pl
Benchmark: timing 5 iterations of jimmy...
num lines read = 299970
num lines read = 299970
num lines read = 299970
num lines read = 299970
num lines read = 299970
jimmy: 6 wallclock secs ( 6.29 usr + 0.05 sys = 6.34 CPU) @ 0.79/s (n=5)
Update: To do the simple math: Benchmark's "0.79/s" means 0.79 runs per CPU second, i.e. 6.34 CPU sec / 5 runs ~ 1.27 sec per run of ~300K lines.
1.27 sec / 300K ~ x / 1000K
x ~ 4.2 sec
That's approx 237,000 lines per second.
And that appears to me to be very fast.
At that rate, 12 seconds could process almost 3 million
lines (not bytes), and even the full 10-million-line "b" would take well under a minute.
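A quick sanity check on that arithmetic, as a throwaway one-liner using the line count and CPU time from the Benchmark output above:
[prompt]$ perl -e 'printf "%.0f lines/sec\n", 299_970 * 5 / 6.34'
236569 lines/sec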
Your benchmark is not realistic.
xxx@xxx:~/test/perl$ wc -l b
9999100 b
"b" is a file with 9,999,100 LINES in it or about 10 million.
How often do you actually process a single file containing 10 million lines?
I think that this is very rare!
If you want to benchmark Perl vs some awk thing, get realistic!
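For what it's worth, here is a sketch of how I'd generate a more modest test file to benchmark against. The real format of "b" isn't shown, so the 10-random-integers-per-line layout (and the mkb.pl name) is just my assumption:
#!/usr/bin/perl -w
use strict;
# file: mkb.pl -- hypothetical generator for a "b"-like test file
# (assumes 10 integers per line, which matches my description above)
my $lines = shift || 300_000;
open( my $out, '>', 'b' ) or die "can't open b: $!";
for ( 1 .. $lines )
{
    print $out join( ' ', map { int rand 100_000 } 1 .. 10 ), "\n";
}
close $out;
Run it as "[prompt]$ mkb.pl 300000" for a 300K-line file, then time your awk thing and the Perl against the same file.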