in reply to Re^3: Advice for optimizing lookup speed in gigantic hashes
in thread Advice for optimizing lookup speed in gigantic hashes

Aha - sorry, too quick to jump to conclusions.

Anyway, good point. I'm using split(/ /) to split on spaces. Here is the bare skeleton of my code, which does nothing but count misspellings:

@files = glob "*";
foreach $file (@files) {
    open( INPUT, $file );
    while (<INPUT>) {
        @line = split(/ /);            # split each sentence on single spaces
        foreach (@line) {
            $errors++ unless $d{$_};   # count words missing from the dictionary
        }
    }
    close(INPUT);
}

Are there faster ways to split strings? My files (as I have formatted them) have one sentence per line.
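One thing I plan to try is split(' ') in place of my split(/ /): as I understand it, perl special-cases the single-space string form (it splits on runs of whitespace and skips leading whitespace, awk-style), and I've seen it claimed to be faster than the equivalent pattern. A minimal Benchmark sketch, with a made-up sample line standing in for my data, might look like this:

#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw( cmpthese );

# A stand-in for one of my one-sentence-per-line inputs.
my $line = join ' ', ( 'misspeling', 'word' ) x 10;

cmpthese( -3, {
    pattern => sub { my @w = split( / /, $line ) },   # what I use now
    special => sub { my @w = split( ' ', $line ) },   # special-cased string form
} );

(Note the two aren't strictly equivalent: split(' ') collapses runs of spaces, which for my one-space-separated sentences shouldn't matter.)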

I've also been playing around with this some more, looking for the same results you just showed here. Going from doing the hash lookup in my innermost loop to doing nothing at all with each word shaved off only 20% of my time, so 80% is spent just getting to the word. That 20% still adds up to 16 hours or so for the whole corpus, but it does suggest that better savings lie elsewhere.

Using /usr/bin/time I've found that the process's "user time" is only marginally less than the "elapsed (wall clock) time", which, if I understand correctly, means that very little time was spent waiting for I/O or for the CPU, so it must all be in this program - and it seems the majority is in the splitting.
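Rather than guessing, I suppose I should confirm where the time goes with a profiler like Devel::NYTProf (assuming it's installed; spellcheck.pl is just a stand-in for my actual script name):

perl -d:NYTProf spellcheck.pl
nytprofhtml    # writes an HTML report breaking down time per statement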

If only I'd started my dissertation two months ago; then I could stop worrying about this, just let my various pre-processing tasks run for a week, and get on with it. Anyway, thanks for your help - I welcome any thoughts or suggestions, even if they lead somewhat off the topic of hashes. It's certainly the case that the hash lookups are bloody fast! If my math is correct, my mediocre machine can spell check about 180 billion words against a 100,000-word dictionary in 16 hours of CPU time. Yikes.

Re^5: Advice for optimizing lookup speed in gigantic hashes
by BrowserUk (Patriarch) on Aug 23, 2011 at 04:41 UTC

    Try this:

    #! perl -slw
    use strict;
    use threads;

    # Slurp the dictionary: one word per line.
    my @words = do{ local( @ARGV, $/ ) = 'your.dictionary'; <> };
    chomp @words;

    # Build a lookup hash; only key existence matters, so values stay undef.
    my %dict;
    undef @dict{ @words };

    my $errors = 0;

    # Open the first file up front, then overlap each subsequent open
    # with the processing of the previous file.
    my @files = glob "*";
    my $savedfh;
    open $savedfh, '<', shift( @files ) or die $!;

    for my $file ( @files ) {
        my $fh = $savedfh;

        # Start opening the next file in a background thread.
        my $thread = async{ open my $fh, '<', $file or die $!; $fh };

        # Split each line on whitespace and count unknown words.
        map +( exists $dict{$_} || ++$errors ), split while <$fh>;

        $savedfh = $thread->join;
    }

    # Don't forget the final file, whose handle the last thread opened.
    map +( exists $dict{$_} || ++$errors ), split while <$savedfh>;

    print $errors;

    There are three attempted optimisations going on here:

    1. By amalgamating the reading from a file, the splitting into words, the hash lookup, and the counting into a single statement, it avoids two extra levels of scope and the building of an intermediate array.
    2. Using my variables, which are quicker than globals (see the sketch after this list).

      It also allows strict & warnings, which I prefer.

    3. Using threads to overlap the opening of the next file whilst processing the last.

      This is probably where most of your time is going. The initial lookup of a file, especially in a large directory, is often quite expensive in terms of time.

      This will only be effective if you are processing a few large files rather than zillions of small ones.
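
    As a rough illustration of point 2, here is a minimal Benchmark sketch; the exact numbers will vary with your perl build, but the lexical usually comes out ahead:

    use strict;
    use warnings;
    use Benchmark qw( cmpthese );

    our $global  = 0;    # package (global) variable
    my  $lexical = 0;    # pad (lexical) variable

    cmpthese( -3, {
        global  => sub { $global++  for 1 .. 1_000 },
        lexical => sub { $lexical++ for 1 .. 1_000 },
    } );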

    Try it and see what, if any, benefit you derive. If it is effective and you want to understand more, just ask.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      Thanks for all of your help!

      I tried all of this out, initially testing it on 100 files of about 1MB each - inefficient, I know, but it's the format my data originally came in (each file is a book). As you foresaw, threading over zillions of small files wasn't great, and your script actually increased my running time from 11 seconds to 17 seconds. However, when I took out the threading but left the rest, it cut my running time in half, to 5.5 seconds! Wonderful! I had grown accustomed to declaring variables separately rather than creating them on the fly, because I find it easier to read and maintain my code that way, and I never quite noticed how inefficient (at least in Perl) that is.

      When I combined my input from 100 1MB files into 4 25MB files, running the improved script without threading dropped to 4 seconds, and putting threading back in only slowed it down a little, to 4.7 seconds - so I guess opening a file takes a little longer than spell checking 25MB, or there is overhead for threading, or both. When I process the whole corpus I was planning to combine everything into about 1000 1GB files, and then I presume the threading would only make things quicker. (Though it leaves me with a lingering, generic question that I was considering asking on Stack Overflow or somewhere, because it's not specifically Perl-related: why not just have a single 1TB file if I'm only ever processing the whole thing? Not that 1000 file opens take up a large portion of an operation that runs for several days, assuming the time to open a filehandle is constant relative to file size.)
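
      For what it's worth, I could measure the open cost directly with Time::HiRes - 'some_book.txt' here is just a placeholder for one of my files:

      use strict;
      use warnings;
      use Time::HiRes qw( gettimeofday tv_interval );

      my $t0 = [ gettimeofday ];
      open my $fh, '<', 'some_book.txt' or die $!;
      printf "open took %.6f seconds\n", tv_interval( $t0 );
      close $fh;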

      Anyway, thanks so much for your help - you cut my execution time in half and pointed me in the right direction for future savings. Cheers!

        or there is overhead for threading,

        Starting a thread is fairly expensive in perl, so I'm not surprised by your results for the 25MB files. I think you would indeed see an improvement if you tried 1000 1GB files.
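
        You can get a feel for the per-thread cost on your own box with a quick sketch like this:

        use strict;
        use warnings;
        use threads;
        use Time::HiRes qw( time );

        my $t0 = time;
        threads->create( sub {} )->join for 1 .. 100;
        printf "~%.1f ms per thread start+join\n", ( time() - $t0 ) * 1000 / 100;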

        why not just have a single 1TB file if I'm only ever processing the whole thing?

        In your situation, given sufficient free disk space for a 1TB file not to be a problem to manipulate, that is exactly what I would do. It removes the directory searching problem completely and removes any need for the threading.

        But make sure that your other tools are up to date and capable of handling >4GB. (My installed version of tail isn't, for example.)
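
        A quick sanity check for perl itself is whether it was built with large-file support:

        perl -V:uselargefiles
        # expect: uselargefiles='define';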

        Anyway, thanks so much for your help - you cut my execution time in half and pointed me in the right direction for future savings. Cheers!

        Glad to have helped.

