in reply to Re^5: Advice for optimizing lookup speed in gigantic hashes
in thread Advice for optimizing lookup speed in gigantic hashes

Thanks for all of your help!

I tried all of this out, initially testing it on 100 files of about 1MB each. Inefficient, I know, but it's the format my data originally came in (each file is a book). As you foresaw, threading for zillions of small files wasn't great, and your script actually increased my running time from 11 seconds to 17 seconds. However, when I took out the threading but kept the other changes, it cut my running time in half, to 5.5 seconds! Wonderful! I had grown accustomed to declaring variables separately rather than creating them on the fly, because I find the code easier to read and maintain that way, and I never quite noticed how inefficient (at least in Perl) that is.
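
For anyone curious about that last point, here's a minimal, hypothetical benchmark comparing the two declaration styles. The loop body is purely illustrative (it isn't from my actual script), and which variant wins, and by how much, will vary with your perl build:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @words = ('perlmonks') x 100;

    cmpthese( -1, {
        # declare every lexical up front, assign later
        declare_then_assign => sub {
            my ( $word, $len, $total );
            $total = 0;
            for $word (@words) {
                $len    = length $word;
                $total += $len;
            }
            return $total;
        },
        # declare each lexical at its point of first use
        declare_at_use => sub {
            my $total = 0;
            for my $word (@words) {
                my $len = length $word;
                $total += $len;
            }
            return $total;
        },
    } );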

When I combined my input from 100 1MB files into 4 25MB files, the improved script without threading ran in 4 seconds, and putting the threading back in only slowed it down a little, to 4.7 seconds. So I guess opening a file takes a little longer than spell-checking 25MB, or there is overhead for threading, or both. When I process the whole corpus I was planning on combining everything into about 1000 1GB files, and I presume that at that scale the threading would only make things quicker. (Though it leaves me with a lingering, generic question that I was considering asking on Stack Overflow or somewhere, because it's not specifically Perl-related: why not just have a single 1TB file if I'm only ever processing the whole thing? Not that 1000 file opens take up a large portion of an operation that runs for several days, assuming that the time to open a filehandle is constant relative to the file size.)
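
That assumption is easy to check, since open() only touches the file's metadata, not its contents. Here's a quick, hypothetical sketch timing repeated open/close calls; 'small.txt' and 'big.txt' are placeholder names for a tiny file and a multi-GB file on the same filesystem:

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    for my $file ( 'small.txt', 'big.txt' ) {
        my $t0 = [gettimeofday];
        for ( 1 .. 1000 ) {
            open my $fh, '<', $file or die "Cannot open $file: $!";
            close $fh;
        }
        printf "%s: %.1f us per open+close\n",
            $file, 1_000_000 * tv_interval($t0) / 1000;
    }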

Anyway, thanks so much for your help - you cut my execution time in half and pointed me in the right direction for future savings. Cheers!


Re^7: Advice for optimizing lookup speed in gigantic hashes
by BrowserUk (Patriarch) on Aug 23, 2011 at 15:00 UTC
    or there is overhead for threading,

    Starting a thread is fairly expensive in perl, so I'm not surprised by your results for the 25MB files. I think you would indeed see an improvement if you tried 1000 1GB files.
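
    For a rough sense of that cost, here is a minimal sketch measuring create+join time for a do-nothing thread. It assumes a threaded perl build with the core threads module:

        use strict;
        use warnings;
        use threads;
        use Time::HiRes qw(gettimeofday tv_interval);

        my $count = 100;
        my $t0    = [gettimeofday];
        for ( 1 .. $count ) {
            # spawn a thread that does nothing, then wait for it
            threads->create( sub { return } )->join;
        }
        printf "%.2f ms per create+join\n",
            1000 * tv_interval($t0) / $count;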

    why not just have a single 1TB file if I'm only ever processing the whole thing?

    In your situation, given sufficient free disk space that manipulating a 1TB file is not a problem, that is exactly what I would do. It eliminates the directory-searching problem completely and removes any need for the threading.
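
    The processing then collapses to a single sequential read. A minimal sketch, with 'corpus.txt' standing in for the combined file and the per-word work elided:

        use strict;
        use warnings;

        open my $fh, '<', 'corpus.txt'
            or die "Cannot open corpus.txt: $!";
        while ( my $line = <$fh> ) {
            chomp $line;
            for my $word ( split ' ', $line ) {
                # spell-check / hash lookup for each word goes here
            }
        }
        close $fh;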

    But make sure that your other tools are up to date and capable of handling files larger than 4GB. (My installed version of tail isn't, for example.)
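
    You can check whether your perl itself was built with large-file support; this should print uselargefiles='define':

        perl -V:uselargefiles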

    Anyway, thanks so much for your help - you cut my execution time in half and pointed me in the right direction for future savings. Cheers!

    Glad to have helped.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.