parse a 1GB file for a fixed string in about 15 seconds. The same parsing done completely with a Perl file loop takes 15x as long.
That suggests that Perl is taking 3 3/4 minutes to read through a 1GB file, which is a nonsense. It takes just 12 seconds on my machine, which is nothing special in the disk department:
[ 9:27:30.43] c:\test>perl -nle1 1GB.dat

[ 9:27:42.87] c:\test>
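And adding a fixed-string test to that read loop needn't cost much more. A minimal sketch (the term 'lima' and the script name below are placeholders, and I haven't timed this exact variant) that prints every matching line using index() rather than the regex engine:

#! perl -sw
use strict;

my $needle = 'lima';    # placeholder: the literal string being searched for
while( <> ) {
    # index() is a plain substring search; no regex is compiled or run
    print if index( $_, $needle ) >= 0;
}

Run it as perl fixedscan.pl 1GB.dat >matches.dat; there is no reason for a loop like that to take minutes rather than seconds.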
I can only speculate, but Perl needs to read the entire file line by line and search for the string, while grep obviously does something far more efficient to find the string. In the end, I haven't found anything close to the speed of grep for parsing large 1GB files.
grep cannot avoid reading every line of the file, so the difference is not due to some special grep magic.
A much more likely cause is the way you've coded your Perl script.
The simple script I posted above produces all the sections containing a particular search term from a >1GB file in 50 seconds without using anything fancy:
#! perl -sw
use strict;

my $term = shift;               # search term taken from the command line
my @section;                    # lines of the section accumulated so far

while( <> ) {
    @section = () if /Header/;  # a new section starts; discard the previous one
    push @section, $_;
    if( /$term/ ) {
        # Term found: print the section accumulated so far, then copy the
        # remainder of the section through until the next Header line.
        print for @section;
        print while defined( $_ = <> ) and not /^Header/;
    }
}
__END__
07/02/2011  09:33     1,090,025,317 886391.dat

[ 9:47:22.80] c:\test>junk34 lima 886391.dat >junk.dat

[ 9:48:12.67] c:\test>
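One refinement worth trying (I haven't timed it here): compile the pattern once with qr// and wrap the term in \Q...\E, so a literal search term is matched literally and the pattern is built only once rather than interpolated on every line:

#! perl -sw
use strict;

my $term = shift;
my $re   = qr/\Q$term\E/;       # compiled once; \Q...\E makes $term a literal match
my @section;

while( <> ) {
    @section = () if /Header/;
    push @section, $_;
    if( /$re/ ) {
        print for @section;
        print while defined( $_ = <> ) and not /^Header/;
    }
}

The rest of the logic is unchanged, and it is invoked exactly as above.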
In reply to Re^6: search array for closest lower and higher number from another array by BrowserUk
in thread search array for closest lower and higher number from another array by bigbot