in reply to Advice for optimizing lookup speed in gigantic hashes

I ran such a dictionary over a gigabyte of text on my computer, and it took five minutes to check every word. This works out to about 3.5 days on the whole 1 TB corpus.

It seems unlikely that your slow timings are due to the hash lookups. It is far more likely to be down to how you are breaking your data into words.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^2: Advice for optimizing lookup speed in gigantic hashes
by tobek (Novice) on Aug 23, 2011 at 02:47 UTC
    Whoops! That sure was ambiguous! I didn't mean that it took 5 minutes to check every word. I meant: checking every word, it took five minutes to process a gig of text. I guess I could have just said "It took five minutes to process a gig of text." I've updated my question =)

      By way of demonstration: looking up 9000 words in a hash is roughly 6x faster than splitting those same 9000 words out of a string:

      $s = 'the quick brown fox jumps over the lazy dog' x 1000;
      ++$h{ $_ } for qw[ the quick brown fox jumps over the lazy ];
      cmpthese -1, {
          a => q[ my @words = split ' ', $s; ],
          b => q[ my $n = 0; $h{$_} && ++$n for ( qw[ the quick brown fox jumps over the lazy dog ] ) x 1000; ],
      };

               Rate     a     b
      a    95.4/s    --  -85%
      b     644/s  575%    --
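      A standalone version of that benchmark might look like the sketch below (an assumption on my part: it just adds the Benchmark module's cmpthese, which the debugger one-liner above relies on, and reproduces the same data):

```perl
use strict;
use warnings;
use Benchmark qw( cmpthese );

# Same data as the one-liner. Note the repetitions concatenate without a
# separating space, so "dog"/"the" fuse at each of the 999 junctions and
# split actually yields 8001 tokens rather than a clean 9000.
my $s = 'the quick brown fox jumps over the lazy dog' x 1000;
my %h;
++$h{$_} for qw( the quick brown fox jumps over the lazy );

cmpthese( -1, {
    # a: split the long string into a list of words
    a => sub { my @words = split ' ', $s; },
    # b: look up the already-split words in the hash
    b => sub {
        my $n = 0;
        $h{$_} && ++$n for ( qw( the quick brown fox jumps over the lazy dog ) ) x 1000;
    },
} );
```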

      I didn't mean that it took 5 minutes to check every word.

      I understood you.

      My point was that it will inevitably take longer to split a line of input into words than it does to look those words up in a hash. So, if the overall time taken is too long, you should be looking at how to split the words rather than how to look them up once you've split them.



        Aha - sorry, too quick to jump to conclusions.

        Anyway, good point. I'm using split(/ /) to split on spaces. Here is the skeleton of my code, which does nothing but count misspellings:

        @files = glob "*";
        foreach $file (@files) {
            open( INPUT, $file );
            while (<INPUT>) {
                @line = split( / / );
                foreach (@line) {
                    $errors++ unless $d{$_};
                }
            }
            close( INPUT );
        }

        Are there faster ways to split strings? My files (as I have formatted them) have one sentence per line.
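        For what it's worth, here is a sketch (my own, not from the thread) of two alternatives to split(/ /) that are often at least as fast: split ' ', which perl special-cases to split on runs of whitespace and skip leading whitespace, and a global match loop that checks each word as it is found instead of building a list first. All three count the same misspellings on the toy line below (the dictionary deliberately omits 'jumps'):

```perl
use strict;
use warnings;

my $line = 'the quick brown fox jumps over the lazy dog';
my %d = map { $_ => 1 } qw( the quick brown fox over lazy dog );    # no 'jumps'

# 1) split / / -- splits on single literal spaces (what the skeleton uses)
my $errors_a = grep { !$d{$_} } split / /, $line;

# 2) split ' ' -- perl's special awk-like case: splits on whitespace runs
my $errors_b = grep { !$d{$_} } split ' ', $line;

# 3) a global match -- checks each word as it is found, with no
#    intermediate list at all
my $errors_c = 0;
while ( $line =~ /(\S+)/g ) {
    $errors_c++ unless $d{$1};
}

print "$errors_a $errors_b $errors_c\n";    # 1 1 1 -- each counts 'jumps'
```

        Which of the three wins on real data depends on the line lengths and the perl build, so it is worth benchmarking them against your own corpus.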

        I've also been playing around with this more and am seeing the same results you just showed here. Going from doing the hash lookup in my innermost loop to doing nothing at all with each word shaved off only 20% of my time, so the other 80% goes to getting there. That 20% still adds up to 16 hours or so for the whole corpus, but it does suggest that the better savings lie elsewhere.

        Using /usr/bin/time I've found that the process's "user time" is only marginally less than the "elapsed (wall clock)" time, which, if I understand correctly, means that very little time was spent waiting for I/O or for the CPU; nearly all of it is in this program, and it seems the majority is in the splitting.
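        That reading of user versus wall time can also be sanity-checked from inside perl itself with the built-in times function (a sketch, not part of the original script): for purely CPU-bound work the two figures come out nearly equal, while a large gap would point at I/O or other waiting.

```perl
use strict;
use warnings;
use Time::HiRes qw( time );

my $wall_start = time;
my $user_start = ( times )[0];    # user CPU seconds consumed so far

# Some purely CPU-bound work, standing in for the split/lookup loop.
my $sum = 0;
$sum += $_ for 1 .. 2_000_000;

my $wall = time - $wall_start;
my $user = ( times )[0] - $user_start;

printf "wall=%.3fs user=%.3fs\n", $wall, $user;
```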

        If only I'd started my dissertation two months ago; then I could have stopped worrying about this, let my various pre-processing tasks run for a week, and gotten on with it. Anyway, thanks for your help; I welcome any thoughts or suggestions, though they may lead somewhat off the topic of hashes. It's certainly the case that the hash lookups are bloody fast! If my math is correct, my mediocre machine can spell-check about 180 billion words against a 100,000-word dictionary in 16 hours of CPU time. Yikes.
