Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

I regularly do analysis by reading many large text files into hashes and then doing the actual analysis. Every time I run the scripts, I have to read the text files sequentially before executing the other subroutines. Is there a way to fork off my "text reading" subroutines so that all the text files are read in parallel processes, and at the end I can use all the loaded hashes in my master script to do my analysis?
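For reference, here is a minimal sketch of the fork-and-pipe pattern the question describes: each child reads one file into a hash, serializes it with Storable, and writes it back to the parent over a pipe. The file names and the tab-separated key/value format are placeholder assumptions.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Storable qw(freeze thaw);

    my @files = ('data1.txt', 'data2.txt', 'data3.txt');  # placeholder names

    my %pipes;    # child pid => parent's read end of that child's pipe
    for my $file (@files) {
        pipe(my $reader, my $writer) or die "pipe: $!";
        my $pid = fork();
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {                       # child: load one file
            close $reader;
            my %hash;
            open my $fh, '<', $file or die "$file: $!";
            while (my $line = <$fh>) {
                chomp $line;
                my ($key, $value) = split /\t/, $line;   # assumes TSV
                $hash{$key} = $value;
            }
            close $fh;
            print {$writer} freeze(\%hash);    # ship the hash to the parent
            close $writer;
            exit 0;
        }
        close $writer;                         # parent keeps only the read end
        $pipes{$pid} = $reader;
    }

    my %loaded;   # hashrefs from the children, ready for analysis
    for my $pid (keys %pipes) {
        my $fh = $pipes{$pid};
        my $frozen = do { local $/; <$fh> };   # slurp the frozen image
        close $fh;
        $loaded{$pid} = thaw($frozen);
        waitpid($pid, 0);
    }
    # %loaded now holds one hashref per child; analyse away.

The children do read in parallel, but each finished hash still has to cross a process boundary, so you pay a serialize-and-copy cost once per file; if the hashes are huge relative to the parsing work, that copy can eat much of the win.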

Re: using hashes (or hash values) across FORKED processes
by ozone (Friar) on Sep 20, 2001 at 16:56 UTC

    Take a look at POE. It's basically user-space threading, with an application kernel that allows you to do exactly what you're talking about, without the messiness of forking and shared memory...

    A bit of a steep learning curve, but one you won't look back from!
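    To make that concrete, here is a minimal POE sketch (file names and tab-separated format are assumptions): one session per file, each reading a line per event, so the reads interleave cooperatively inside a single process and the hashes can be shared directly. Note this gives concurrency in one process, not true multi-CPU parallelism.

        use strict;
        use warnings;
        use POE;

        my %data;   # one sub-hash per file; same process, so freely shared

        for my $file ('data1.txt', 'data2.txt') {    # placeholder names
            POE::Session->create(
                inline_states => {
                    _start => sub {
                        my ($kernel, $heap) = @_[KERNEL, HEAP];
                        open $heap->{fh}, '<', $file or die "$file: $!";
                        $kernel->yield('read_line');
                    },
                    read_line => sub {
                        my ($kernel, $heap) = @_[KERNEL, HEAP];
                        my $fh = $heap->{fh};
                        if (defined(my $line = <$fh>)) {
                            chomp $line;
                            my ($key, $value) = split /\t/, $line;  # assumes TSV
                            $data{$file}{$key} = $value;
                            $kernel->yield('read_line');  # re-queue: lets other sessions run
                        } else {
                            close $fh;    # no more events, so this session ends
                        }
                    },
                },
            );
        }

        POE::Kernel->run();    # returns once every session has finished
        # %data is now fully populated, no IPC required.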

Re: using hashes (or hash values) across FORKED processes
by merlyn (Sage) on Sep 20, 2001 at 18:33 UTC
Re: using hashes (or hash values) across FORKED processes
by suaveant (Parson) on Sep 20, 2001 at 17:19 UTC
    Really, this is a database application (in my opinion). Fork off some processes that read the text into MySQL or Postgres (or maybe even GDBM), then select what you need out of the mix; a rough sketch follows. But if you haven't used databases before, this may not be the quickest solution ;)
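
    A rough sketch of that approach, assuming DBI with MySQL, a pre-created table kv(source, k, v), and tab-separated input (the DSN, credentials, table, and file names are all placeholders). Each child must open its own database handle, because DBI handles can't be shared across fork():

        use strict;
        use warnings;
        use DBI;

        my @files = ('data1.txt', 'data2.txt');   # placeholder file names

        my @pids;
        for my $file (@files) {
            my $pid = fork();
            die "fork: $!" unless defined $pid;
            if ($pid == 0) {                      # child: its OWN $dbh
                my $dbh = DBI->connect('dbi:mysql:database=analysis',
                                       'user', 'pass', { RaiseError => 1 });
                my $sth = $dbh->prepare(
                    'INSERT INTO kv (source, k, v) VALUES (?, ?, ?)');
                open my $fh, '<', $file or die "$file: $!";
                while (my $line = <$fh>) {
                    chomp $line;
                    my ($key, $value) = split /\t/, $line;   # assumes TSV
                    $sth->execute($file, $key, $value);
                }
                close $fh;
                $dbh->disconnect;
                exit 0;
            }
            push @pids, $pid;
        }
        waitpid($_, 0) for @pids;                 # let every loader finish

        # Master process: connect once and query whatever slice you need.
        my $dbh = DBI->connect('dbi:mysql:database=analysis',
                               'user', 'pass', { RaiseError => 1 });
        my $rows = $dbh->selectall_arrayref(
            'SELECT k, v FROM kv WHERE source = ?', undef, 'data1.txt');

    Once the rows are loaded, the master can slice the data any way it likes with SQL instead of juggling several giant in-memory hashes.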

                    - Ant
                    - Some of my best work - Fish Dinner