in reply to Re: Advice for optimizing lookup speed in gigantic hashes
in thread Advice for optimizing lookup speed in gigantic hashes

Whoops! That sure was ambiguous! I didn't mean that it took 5 minutes to check every word; I meant that, checking every word, it took five minutes to process a gig of text. I guess I could have just said "It took five minutes to process a gig of text." I've updated my question =)

Replies are listed 'Best First'.
Re^3: Advice for optimizing lookup speed in gigantic hashes
by BrowserUk (Patriarch) on Aug 23, 2011 at 03:27 UTC

    By way of demonstration: looking up 9000 words in a hash is 6X faster than splitting those same 9000 words out of a string:

    $s = 'the quick brown fox jumps over the lazy dog' x 1000;;

    ++$h{ $_ } for qw[ the quick brown fox jumps over the lazy ];;

    cmpthese -1, {
        a => q[ my @words = split ' ', $s; ],
        b => q[ my $n=0; $h{$_} && ++$n for (qw[ the quick brown fox jumps over the lazy dog ]) x 1000; ],
    };;
             Rate     a     b
    a      95.4/s    --  -85%
    b       644/s  575%    --

    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
Re^3: Advice for optimizing lookup speed in gigantic hashes
by BrowserUk (Patriarch) on Aug 23, 2011 at 03:10 UTC
    I didn't mean that it took 5 minutes to check every word.

    I understood you.

    My point was that it will inevitably take longer to split a line of input into words than it does to look those words up in a hash. So, if the overall time taken is too long, you should be looking at how you split the words rather than at how you look them up once you've split them.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      Aha - sorry, too quick to jump to conclusions.

      Anyway, good point. I'm using split(/ /) to split on spaces. Here is the bare skeleton of my code, which does nothing but count misspellings:

      @files = glob "*";
      foreach $file (@files) {
          open(INPUT, $file);
          while (<INPUT>) {
              @line = split(/ /);
              foreach (@line) {
                  $errors++ unless $d{$_};
              }
          }
          close(INPUT);
      }

      Are there faster ways to split strings? My files (as I have formatted them) have one sentence per line.
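
      One thing that may be worth measuring: split / / is a genuine regex split on every single space character, whereas split ' ' is a documented special case that splits on runs of whitespace and discards leading whitespace. Here is a minimal sketch of how the two forms could be compared with cmpthese from the Benchmark module (the same routine used in the benchmark above); the sample line is made up, so substitute a line from the real corpus:

      #!/usr/bin/perl
      # Sketch: compare split(/ /) against the special-case split(' ').
      # The sample line is invented; real corpus lines may behave differently.
      use strict;
      use warnings;
      use Benchmark qw( cmpthese );

      my $line = join ' ', ('correct horse battery staple') x 250;

      cmpthese( -1, {
          regex_space   => sub { my @w = split / /, $line },
          special_space => sub { my @w = split ' ', $line },
      } );

      Whether that helps on real data is something only the benchmark can say; note also that the two forms differ slightly in behaviour, since split / / keeps empty fields for runs of spaces while split ' ' does not.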

      I've also been playing around with this some more and I'm seeing the same results you just showed here. Going from doing only the hash lookup in my innermost loop to doing nothing at all but looking at the word shaved off only 20% of my time, so 80% is spent getting there. That 20% still adds up to 16 hours or so for the whole corpus, but it does suggest that better savings lie elsewhere.

      Using /usr/bin/time I've found that the process's "user time" is only marginally less than the "elapsed (wall clock) time", which, if I understand correctly, means that very little time was spent waiting for I/O or for the CPU, so it must all be in this program - seems like the majority is in the splitting.
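
      To apportion that CPU time from inside the program, here is a rough two-pass sketch over a single file: pass one only splits, pass two splits and looks the words up, so the difference approximates the lookup cost. The file name and the tiny stand-in dictionary are placeholders, and the file needs to be reasonably large for the numbers to mean anything:

      #!/usr/bin/perl
      # Sketch: time splitting alone versus splitting plus hash lookups over
      # one file. 'sample.txt' and the small %d below are placeholders for a
      # real corpus file and the real 100,000-word dictionary hash.
      use strict;
      use warnings;
      use Time::HiRes qw( time );

      my %d = map { $_ => 1 } qw( the quick brown fox );

      my @lines = do { open my $fh, '<', 'sample.txt' or die $!; <$fh> };

      my $t0 = time;
      for ( @lines ) { my @w = split ' ' }
      my $split_only = time - $t0;

      my $errors = 0;
      $t0 = time;
      for ( @lines ) { $errors += grep { !exists $d{$_} } split ' ' }
      my $both = time - $t0;

      printf "split only: %.3fs   split+lookup: %.3fs   lookup share: ~%.0f%%\n",
          $split_only, $both, 100 * ( $both - $split_only ) / $both;

      On a CPU-bound run like the one described, wall-clock time from Time::HiRes is a reasonable proxy for user time.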

      If only I'd started my dissertation two months ago; then I could stop worrying about this, let my various pre-processing tasks run for a week, and get on with it. Anyway, thanks for your help - I welcome any thoughts or suggestions, even if they lead somewhat off the topic of hashes. It's certainly the case that the hash lookups are bloody fast! If my math is correct, my mediocre machine can spell check about 180 billion words against a 100,000-word dictionary in 16 hours of CPU time. Yikes.

        Try this:

        #! perl -slw
        use strict;
        use threads;

        ## Load the dictionary: assumed one word per line.
        my @words = do{ local @ARGV = 'your.dictionary'; <> };
        chomp @words;

        my %dict;
        undef @dict{ @words };

        my $errors = 0;
        my $savedfh;

        my @files = glob "*";
        open $savedfh, '<', shift( @files ) or die $!;

        for my $file ( @files ) {
            my $fh = $savedfh;

            ## Open the next file in a background thread while processing this one.
            my $thread = async{ open my $fh, '<', $file or die $!; $fh };

            map +( exists $dict{ $_ } || ++$errors ), split while <$fh>;

            $savedfh = $thread->join;
        }

        ## The loop leaves the last file's handle unread; count its words too.
        map +( exists $dict{ $_ } || ++$errors ), split while <$savedfh>;

        print $errors;

        There are three attempted optimisations going on here:

        1. By amalgamating the reading from the file, the splitting into words, the hash lookup, and the counting into a single statement, it avoids two extra levels of scope and the building of an intermediate array (that combined statement is written out the long way in a sketch after this list).
        2. Using my variables, which are quicker than globals.

          It also allows strict and warnings, which I prefer.

        3. Using threads to overlap the opening of the next file whilst processing the last.

          This is probably where most of your time is going. The initial lookup of a file, especially if it is in a large directory, is often quite expensive in terms of time.

          This will only be effective if you are processing a few large files rather than zillions of small ones.
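
        Regarding point 1, here is the combined statement written out the long way - a behaviourally equivalent sketch (with a stand-in file name and dictionary) that shows the temporary array and extra block scopes the one-liner avoids:

        #!/usr/bin/perl
        # Long-hand equivalent of:
        #     map +( exists $dict{$_} || ++$errors ), split while <$fh>;
        # 'sample.txt' and the small word list are stand-ins.
        use strict;
        use warnings;

        my %dict;
        @dict{ qw( the quick brown fox ) } = ();   # stand-in dictionary "set"
        my $errors = 0;

        open my $fh, '<', 'sample.txt' or die $!;
        while ( <$fh> ) {                 # read each line into $_
            # The temporary @words array and the inner loop below are exactly
            # what the one-liner collapses into a single statement.
            my @words = split ' ';        # default split: $_ on whitespace
            for my $word ( @words ) {
                ++$errors unless exists $dict{ $word };
            }
        }
        print "$errors\n";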

        Try it and see what benefit, if any, you derive. If it is effective and you want to understand more, just ask.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.