http://qs1969.pair.com?node_id=800559


in reply to Google like scalability using perl?

Another way to do this is to run Apache and mod_perl on several machines and split the large hash across them. With mod_perl, the hash and the code stay resident in memory between requests.
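
Here's a rough sketch of what I mean, assuming each backend holds one shard of the hash behind a /lookup handler; the hostnames, the handler path, and the URL layout are made up for illustration:

    #!/usr/bin/perl
    # Hypothetical sketch: route lookups to one of several mod_perl
    # backends, each holding its own shard of the large hash in memory.
    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use URI::Escape qw(uri_escape);
    use LWP::UserAgent;

    my @backends = qw(http://shard1:8080 http://shard2:8080 http://shard3:8080);
    my $ua       = LWP::UserAgent->new( timeout => 5 );

    sub lookup {
        my ($key) = @_;

        # Hash the key to pick a shard; the same key always maps to the
        # same backend, so each backend only needs its slice of the data.
        my $shard = hex( substr( md5_hex($key), 0, 8 ) ) % @backends;

        my $res = $ua->get( "$backends[$shard]/lookup?key=" . uri_escape($key) );
        return $res->is_success ? $res->decoded_content : undef;
    }

    print lookup('some_key') // 'not found', "\n";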

You might also try memcached, as suggested above. The combination of memcached and mod_perl performs better than I expected.
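
A minimal example with the Cache::Memcached module from CPAN; the server addresses and the expensive_lookup() routine are placeholders for whatever your real lookup is:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Cache::Memcached;

    my $memd = Cache::Memcached->new({
        servers            => [ '10.0.0.15:11211', '10.0.0.16:11211' ],
        compress_threshold => 10_000,
    });

    sub expensive_lookup {          # stands in for the real hash/database lookup
        my ($key) = @_;
        return "value for $key";
    }

    my $key   = 'user:42';
    my $value = $memd->get($key);   # check the cache first

    unless ( defined $value ) {
        $value = expensive_lookup($key);
        $memd->set( $key, $value, 300 );    # cache it for five minutes
    }

    print "$key => $value\n";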

For doing a large batch job by farming work out to lots of nodes, the trick is to use something like Amazon's Simple Queue Service, which keeps workers from duplicating each other's work. I don't know of a Perl module that implements this, so if your code has this feature it would make a great CPAN module by itself.
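
The pattern is roughly this: a worker claims a job under a lease, and the job only disappears for good once it is explicitly deleted, so an unfinished job eventually becomes visible again instead of being lost or done twice. A toy single-process sketch of that idea (not a real distributed queue, just the claim-then-delete shape):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @queue         = map { { id => $_, leased_until => 0 } } 1 .. 5;
    my $lease_seconds = 30;

    sub claim_job {
        my $now = time;
        for my $job (@queue) {
            next if $job->{leased_until} > $now;   # someone else holds the lease
            $job->{leased_until} = $now + $lease_seconds;
            return $job;
        }
        return undef;
    }

    sub delete_job {
        my ($job) = @_;
        @queue = grep { $_->{id} != $job->{id} } @queue;
    }

    while ( my $job = claim_job() ) {
        print "working on job $job->{id}\n";
        delete_job($job);    # only after the work succeeds
    }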

It should work perfectly the first time! - toma