![]() |
|
Welcome to the Monastery | |
PerlMonks |
comment on |
( #3333=superdoc: print w/replies, xml ) | Need Help?? |
Another way to do this is to run Apache with mod_perl on several machines and split the large hash between them. With mod_perl, the hash and the code stay in memory between requests.
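A minimal sketch of that kind of key partitioning, assuming hypothetical backend hostnames and a fixed node count; each mod_perl server would load only the slice of the hash whose keys map to it:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Hypothetical backends: each mod_perl machine holds one slice of the hash.
my @backends = qw(app1.example.com app2.example.com app3.example.com);

# Deterministically map a key to the machine that owns its slice.
sub backend_for {
    my ($key) = @_;
    my $slot = hex(substr(md5_hex($key), 0, 8)) % @backends;
    return $backends[$slot];
}

print backend_for('some_key'), "\n";
```

Hashing the key rather than splitting alphabetically keeps the slices roughly the same size, so no one machine ends up holding most of the data.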
You might also try memcached, as suggested above. The combination of memcached and mod_perl performs better than I expected. For a large batch job that farms work out to lots of nodes, the trick is to use something like Amazon's Simple Queue Service, which keeps the workers from overlapping. I don't know of a Perl module that implements this, so if your code has this feature, it would make a great CPAN module by itself.
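A rough sketch of the memcached-in-front-of-mod_perl pattern, using Cache::Memcached from CPAN; the server addresses and expensive_lookup() are placeholders I've made up, not anything from the original setup:

```perl
use strict;
use warnings;
use Cache::Memcached;

# Hypothetical memcached servers shared by all the mod_perl machines.
my $memd = Cache::Memcached->new({
    servers => [ '10.0.0.1:11211', '10.0.0.2:11211' ],
});

sub lookup {
    my ($key) = @_;

    # Try the cache first.
    my $value = $memd->get($key);
    return $value if defined $value;

    # Cache miss: do the real work, then cache the result for an hour.
    $value = expensive_lookup($key);
    $memd->set($key, $value, 3600);
    return $value;
}

# Placeholder for whatever expensive computation or query the app does.
sub expensive_lookup {
    my ($key) = @_;
    return "value-for-$key";
}
```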
It should work perfectly the first time! - toma
In reply to Re: Google like scalability using perl? by toma