in reply to Re^4: search array for closest lower and higher number from another array
in thread search array for closest lower and higher number from another array

Well BrowserUk, it seems that you were right and my line-tracking idea didn't work out so well. Even with your more efficient code, it just took forever. grep was very fast, but the line tracking combined with outputting certain lines of the file was extremely slow. What I ended up doing was a combination of grep (using the 50-line context option) and a foreach loop over the resulting array data. I parse the data that way and it's very fast: on that slow P4/512MB machine I can parse a 1GB file for a fixed string in about 15 seconds. The same parsing done completely with a Perl file loop takes 15x as long.
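
In outline it looks something like this (just a rough sketch, not the actual script; the file name is a stand-in and the real code does more with each chunk of output):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $string = shift or die "No search string provided\n";
    my $file   = 'data1';    ## stand-in for the real file name

    ## -F = fixed-string match, -C50 = 50 lines of context either side
    my @lines = `grep -F -C50 '$string' $file`;

    ## Walk the (much smaller) grep output in Perl
    foreach my $line (@lines) {
        ## ... section tracking / highlighting goes here ...
        print $line;
    }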

I can only speculate, but Perl needs to read the entire file line by line and search each line for the string, while grep obviously does something far more efficient to find it. In the end, I haven't yet found anything close to the speed of grep for parsing large 1GB files.

If anyone out there can find a faster way, I would be impressed. Perhaps there is a Perl module that can do very fast searches on files? AWK? I think that's comparable in terms of speed to grep, and I could also parse the data with it... hmmm..
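
One thing I may try next: skip the line-by-line reading entirely and scan the file in big blocks with sysread plus index. Something along these lines (an untested sketch; the block size and boundary handling are my guesses, and it only answers whether the string is present, so pulling out the enclosing section would still need more work):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my ( $string, $file ) = @ARGV;
    die "usage: $0 STRING FILE\n" unless defined $string and defined $file;

    open my $fh, '<:raw', $file or die "open $file: $!";

    my $blocksize = 1 << 20;                 ## read 1MB at a time
    my $overlap   = length( $string ) - 1;   ## a match may straddle two blocks
    my $tail      = '';
    my $found     = 0;

    while ( sysread $fh, my $block, $blocksize ) {
        if ( index( $tail . $block, $string ) >= 0 ) {
            $found = 1;
            last;
        }
        ## keep the last few bytes so a straddling match isn't missed
        my $keep = $overlap < length( $block ) ? $overlap : length( $block );
        $tail = $keep > 0 ? substr( $block, -$keep ) : '';
    }
    print $found ? "found\n" : "not found\n";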

Re^6: search array for closest lower and higher number from another array
by BrowserUk (Patriarch) on Feb 07, 2011 at 09:50 UTC
    parse a 1GB file for a fixed string in about 15 seconds. The same parsing done completely with a Perl file loop takes 15x as long.

    That suggests that Perl is taking nearly 4 minutes to read through a 1GB file, which is a nonsense. It takes just 12 seconds on my machine, which is nothing special in the disk department:

    [ 9:27:30.43] c:\test>perl -nle1 1GB.dat
    [ 9:27:42.87] c:\test>
    I can only speculate, but Perl needs to read the entire file line by line and search each line for the string, while grep obviously does something far more efficient to find it. In the end, I haven't yet found anything close to the speed of grep for parsing large 1GB files.

    grep cannot avoid reading every line of the file, so the difference is not due to some special grep magic.

    A much more likely cause is the way you've coded your Perl script.

    The simple script I posted above produces all the sections containing a particular search term from a >1GB file in 50 seconds without using anything fancy:

    #! perl -sw
    use strict;

    my $term = shift;

    my @section;
    while( <> ) {
        @section = () if /Header/;
        push @section, $_;
        if( /$term/ ) {
            print for @section;
            print while defined( $_ = <> ) and not /^Header/;
        }
    }
    __END__
    07/02/2011  09:33     1,090,025,317 886391.dat

    [ 9:47:22.80] c:\test>junk34 lima 886391.dat >junk.dat
    [ 9:48:12.67] c:\test>
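
    (All it does is accumulate lines into @section, emptying it at each Header line; when the term matches, it prints whatever has accumulated and then copies lines straight through until the next Header.)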

  Yes, I apologize; it is 15 seconds for my grep method vs. 50 seconds for doing it all in Perl as you've shown. I must have done something weird before to inflate the time by so much.
        it is 15 seconds for my grep method

        ...but your grep method is only doing half of the work, right?

        It gets you a big chunk of lines (context) either side of the line containing the search term, but it doesn't restrict them to just the enclosing section you are interested in. So basically it is doing only half the job. Little wonder that it is quicker?
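
        For example, if a section runs to 200 lines, -C50 hands you at most 101 of them (50 either side of the matching line); if it runs to only 10, you get lines from the neighbouring sections mixed in as well.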

        However, this might be what you're looking for. It gets the time down to 12 seconds on my machine:

        #! perl -sw
        use strict;

        my $term = shift;

        $/ = 'Header';
        while( <> ) {
            chomp;
            print 'Header', $_ if 1+index $_, $term;
        }
        __END__
        [11:28:45.37] c:\test>junk34 lima 886391.dat >junk.dat
        [11:28:57.83] c:\test>
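
        Setting $/ to 'Header' makes the diamond operator read a whole section at a time instead of a line, so a single index per section replaces all the per-line matching. The chomp strips the separator from the end of each record and the print sticks it back on the front; the 1+ turns index's -1 "not found" return into a false value.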

      Here is the code I used just now to test them against each other. The additional operations I added could certainly affect the benchmark, but they are the same in both cases.
      use strict;
      use warnings;
      use Benchmark qw(cmpthese);

      my $fileName = "data1";
      my $string   = $ARGV[0] || die "\nNo Search String Provided.\n\n";

      my $headerRegex = qr{^Header};     ## Match headers
      my $greenRegex  = qr{(^Header.*)}; ## Capture header lines for colouring
      my $redRegex    = qr{($string)};   ## Capture search-string matches

      cmpthese( 1, {
          case1 => sub { case1() },
          case2 => sub { case2() },
      } );

      sub case1 {
          my $matches = `grep -h -F -C50 '$string' $fileName`;
          my $match;
          my $data;
          my $header;
          for ( split /^/, $matches ) { ## More efficient than an array? (Scalars use less memory)
              if ( $_ =~ $headerRegex ) { ## Match headers
                  if ( $match ) {
                      $data =~ s/$greenRegex/\033[1;32m$1\033[m/o;  ## Colour headers in GREEN
                      $data =~ s/$redRegex/\033[1;31m$1\033[m/go;   ## Colour string matches in RED
                      $data =~ s/\n\n/\n/go;                        ## Remove double blank lines
                      print $data, "\n";
                      $match = 0;
                  }
                  $data   = "";
                  $header = 1;
              }
              if ( $header && 0 > index( $_, $string ) ) { ## Body of section
                  $data .= $_;
              }
              elsif ( $header ) {                          ## Line contains the search string
                  $data .= $_;
                  $match = 1;
              }
          }
      }

      sub case2 {
          open( my $fh, '<', $fileName ) or die $!;
          my $match;
          my $data;
          my $header;
          while ( <$fh> ) {
              if ( $_ =~ $headerRegex ) {
                  if ( $match ) {
                      $data =~ s/$greenRegex/\033[1;32m$1\033[m/o;
                      $data =~ s/$redRegex/\033[1;31m$1\033[m/go;
                      $data =~ s/\n\n/\n/go;
                      print $data, "\n";
                      $match = 0;
                  }
                  $data   = "";
                  $header = 1;
              }
              if ( $header && 0 > index( $_, $string ) ) {
                  $data .= $_;
              }
              elsif ( $header ) {
                  $data .= $_;
                  $match = 1;
              }
          }
      }

      Results:
          case1  4.79 s/iter
          case2  46.6 s/iter
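
      (Both cases run from one script: the search string comes in on the command line and the file name is hard-coded in $fileName, so a run looks something like perl compare.pl mystring.)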