in reply to Re^2: search array for closest lower and higher number from another array
in thread search array for closest lower and higher number from another array

Yeah, I've tried this on two different systems: one a P4 Ubuntu machine (this one), the other an enterprise Red Hat server. Both showed similar results.

Regardless, do you have an idea how to do the array comparison without using a module? I can't install a module either. If I can just find an efficient way to run through the first array and get the two closest numbers from the other array, then I can speed-test the complete script using both grep and your solution. Both of the arrays are already sorted, by the way.

Re^4: search array for closest lower and higher number from another array
by BrowserUk (Patriarch) on Feb 05, 2011 at 16:25 UTC

    Okay. If both arrays are sorted, you don't need a binary search.

    You just move through them in parallel and you're done in a single pass:

    #! perl -slw
    use strict;

    my @matches = map $_ * 10, 1 .. 10;
    my @headers = map $_ * 5 + 2, 0 .. 20;

    ## Walk both sorted arrays in parallel: advance the header index
    ## until the next header would pass the current match.
    my $hdr = 0;
    for my $match ( @matches ) {
        $hdr++ while $headers[ $hdr + 1 ] < $match;
        printf "match at %d in section from %d - %d\n",
            $match, $headers[ $hdr ], $headers[ $hdr + 1 ] - 1;
    }
    __END__
    [16:20:40.99] c:\test>junk36
    match at 10 in section from 7 - 11
    match at 20 in section from 17 - 21
    match at 30 in section from 27 - 31
    match at 40 in section from 37 - 41
    match at 50 in section from 47 - 51
    match at 60 in section from 57 - 61
    match at 70 in section from 67 - 71
    match at 80 in section from 77 - 81
    match at 90 in section from 87 - 91
    match at 100 in section from 97 - 101

    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Excellent! That looks way more efficient. I will let you know how it works out.
      Well Browser, it seems you were right and my line-tracking idea didn't work out so well. Even with your more efficient code, it just took forever. Grep was very fast, but line tracking combined with outputting certain lines of the file was extremely slow. What I ended up doing was a combination of grep (using the 50-line context option) and a foreach loop over the resulting array data. I parse the data that way and it's very fast. On that slow P4/512MB machine I can parse a 1GB file for a fixed string in about 15 seconds. The same parsing done completely with a Perl file loop takes 15x as long.
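
      For reference, here is a minimal sketch of that grep-plus-foreach combination. The exact options and the parse_block handler are my assumptions, not the actual script described above: grep -F -C 50 does the fast fixed-string scan with 50 lines of context, and Perl walks the context groups grep emits, which GNU grep separates with a lone '--' line.

      #! perl -w
      use strict;

      my( $term, $file ) = @ARGV;
      die "usage: $0 TERM FILE\n" unless defined $file;

      ## Let grep do the heavy scanning: -F = fixed string,
      ## -C 50 = 50 lines of context around each hit.
      open my $grep, '-|', 'grep', '-F', '-C', '50', '--', $term, $file
          or die "Cannot run grep: $!";

      my @block;
      while( my $line = <$grep> ) {
          if( $line =~ /^--$/ ) {     ## separator between context groups
              parse_block( @block );
              @block = ();
          }
          else {
              push @block, $line;
          }
      }
      parse_block( @block ) if @block;    ## the final context group
      close $grep;

      sub parse_block {
          my @lines = @_;
          ## ... hypothetical placeholder: whatever per-section
          ## parsing the real script does would go here ...
      }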

      I can only speculate, but Perl apparently reads the entire file line by line while searching for the string, whereas grep obviously does something far more efficient to find it. In the end, I haven't yet found anything close to the speed of grep for parsing large 1GB files.

      If anyone out there can find a faster way, I would be impressed. Perhaps there is a Perl module that can do very fast searches on files? AWK? I think that's comparable to grep in speed, and I could also parse the data with it... hmmm.
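
      In the meantime, one pure-Perl idea worth trying (a sketch under my own assumptions, not something from this thread): read the file in large fixed-size chunks rather than lines, and search each chunk with index(), which does a plain fixed-string search with no regex overhead.

      #! perl -w
      use strict;

      my( $term, $file ) = @ARGV;
      die "usage: $0 TERM FILE\n" unless defined $file and length $term;

      open my $fh, '<', $file or die "Cannot open '$file': $!";

      local $/ = \( 1024 * 1024 );    ## read 1MB chunks, not lines

      my $tail  = '';
      my $found = 0;
      while( my $chunk = <$fh> ) {
          ## Carry the last few bytes of the previous chunk so a match
          ## straddling a chunk boundary is not missed.
          my $buf = $tail . $chunk;
          if( index( $buf, $term ) >= 0 ) {
              $found = 1;
              last;
          }
          $tail = length $buf > length $term
              ? substr( $buf, -length $term )
              : $buf;
      }
      close $fh;

      print $found ? "'$term' found\n" : "'$term' not found\n";
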
        parse a 1GB file for a fixed string in about 15 seconds. The same parsing done completely with a Perl file loop takes 15x as long.

        That suggests that Perl is taking 3 1/2 minutes to read through a 1GB file, which is nonsense. It takes just 12 seconds on my machine, which is nothing special in the disk department:

        [ 9:27:30.43] c:\test>perl -nle1 1GB.dat
        [ 9:27:42.87] c:\test>
        I can only speculate, but Perl apparently reads the entire file line by line while searching for the string, whereas grep obviously does something far more efficient to find it. In the end, I haven't yet found anything close to the speed of grep for parsing large 1GB files.

        grep cannot avoid reading every line of the file, so the difference is not due to some special grep magic.

        A much more likely cause is the way you've coded your perl script.

        The simple script I posted above produces all the sections containing a particular search term from a >1GB file in 50 seconds without using anything fancy:

        #! perl -sw
        use strict;

        my $term = shift;

        my @section;
        while( <> ) {
            ## Start a fresh section at each header line.
            @section = () if /Header/;
            push @section, $_;
            if( /$term/ ) {
                ## Print the section so far, then the rest of it up to
                ## (but not including) the next header.
                print for @section;
                print while defined( $_ = <> ) and not /^Header/;
                ## The header that ended the inner loop opens the next
                ## section; keep it so section tracking stays correct.
                @section = defined $_ ? ( $_ ) : ();
            }
        }
        __END__
        07/02/2011  09:33  1,090,025,317 886391.dat

        [ 9:47:22.80] c:\test>junk34 lima 886391.dat >junk.dat
        [ 9:48:12.67] c:\test>

        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re^4: search array for closest lower and higher number from another array
by Corion (Patriarch) on Feb 05, 2011 at 15:36 UTC