in reply to Re^6: Advice for optimizing lookup speed in gigantic hashes
in thread Advice for optimizing lookup speed in gigantic hashes

or there is overhead for threading,

Starting a thread is fairly expensive in Perl, so I'm not surprised by your results for the 25MB files. I think you would indeed see an improvement if you tried 1000 1GB files.
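
If you want to put a number on that startup cost for your own build, here is a minimal sketch (assuming a threads-enabled perl; the iteration count is arbitrary):

    use strict;
    use warnings;
    use threads;
    use Time::HiRes qw(time);

    # Time creating and joining $n trivial threads to estimate the
    # per-thread startup overhead on this particular perl build.
    my $n     = 100;
    my $start = time;
    threads->create( sub { } )->join for 1 .. $n;
    printf "avg create+join: %.1f ms\n", ( time() - $start ) / $n * 1000;

If the per-thread figure dwarfs the time taken to read one 25MB file, that would explain why spawning a thread per file didn't pay off.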

why not just have a single 1TB file if I'm only ever processing the whole thing?

In your situation, given sufficient free disk space that a single 1TB file is not a problem to manipulate, that is exactly what I would do. It eliminates the directory-searching problem completely and removes any need for threading.
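
For what it's worth, building that single file is a one-time concatenation pass; a rough sketch (the directory and output names here are assumptions, not your actual paths):

    use strict;
    use warnings;

    # One-off pass: append every file in data_dir/ to one big file,
    # so later runs make a single sequential read with no dir walking.
    open my $out, '>:raw', 'combined.dat' or die "open combined.dat: $!";
    opendir my $dh, 'data_dir' or die "opendir data_dir: $!";
    for my $name ( sort readdir $dh ) {
        next if $name =~ /^\./;                 # skip . and .. (and dotfiles)
        open my $in, '<:raw', "data_dir/$name" or die "open $name: $!";
        local $/ = \( 1 << 20 );                # read in 1MB chunks
        print {$out} $_ while <$in>;
        close $in;
    }
    closedir $dh;
    close $out;

After that, every subsequent run is one long sequential read, which is about the friendliest access pattern you can give the disk.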

But make sure that your other tools are up to date and capable of handling files >4GB. (My installed version of tail isn't, for example.)
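
You can at least confirm that your perl itself was built with large-file support via the standard Config module:

    # Report whether this perl build has large-file support;
    # an lseeksize of 8 means 64-bit file offsets.
    use Config;
    print 'uselargefiles: ', ( $Config{uselargefiles} || 'no' ), "\n";
    print "lseeksize:     $Config{lseeksize} bytes\n";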

Anyway, thanks so much for your help - you cut my execution time in half and pointed me in the right direction for future savings. Cheers!

Glad to have helped.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.