in reply to Google like scalability using perl?
Another way to do this is to run Apache with mod_perl on several machines and split the large hash between them. With mod_perl, the hash and the code stay resident in memory.
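The splitting can be as simple as hashing each key to pick a shard, so every client routes a given key to the same machine. Here's a minimal sketch of that routing; the server list and port are made up for illustration:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

# Hypothetical list of mod_perl backends, each holding one shard of the hash.
my @servers = ('node0:8080', 'node1:8080', 'node2:8080');

# Pick the shard for a key: hash the key, take the first 32 bits,
# and reduce modulo the number of servers. Deterministic, so every
# client agrees on where a key lives.
sub server_for {
    my ($key) = @_;
    my $n = unpack('N', md5($key));
    return $servers[ $n % @servers ];
}

print server_for('some_key'), "\n";
```

Note that this simple modulo scheme reshuffles most keys if you add or remove a server; consistent hashing avoids that, but for a fixed cluster the modulo is fine.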
You might also try memcached, as suggested above. The combination of memcached and mod_perl performs better than I expected.
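The usual Perl interface is the Cache::Memcached module from CPAN. A minimal sketch, assuming a memcached daemon is running on the default local port (so this won't do anything useful without one):

```perl
use strict;
use warnings;
use Cache::Memcached;

# Connect to one or more memcached daemons; the client hashes keys
# across the server list for you.
my $memd = Cache::Memcached->new({
    servers => ['127.0.0.1:11211'],
});

# Cache a computed value for an hour, then read it back.
$memd->set('answer', 42, 3600);
my $value = $memd->get('answer');
print defined $value ? "cached: $value\n" : "cache miss\n";
```

Under mod_perl you'd create the client once at server startup and reuse it across requests, rather than reconnecting per request.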
For doing a large batch job by farming work out to lots of nodes, the trick is to use something like Amazon's Simple Queue Service, which keeps the work from overlapping. I don't know of a perl module that implements this, so if your code has this feature it would make a great CPAN module by itself.
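The essential property of such a queue is that each job is handed to exactly one worker, so no work overlaps. An in-process sketch of that pattern, using core threads and Thread::Queue as a stand-in for a remote service like SQS:

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Stand-in for a remote work queue: each job is dequeued by exactly
# one worker, so no two workers do the same work.
my $q = Thread::Queue->new(1 .. 20);
$q->end;   # no more jobs will be added

my @workers = map {
    threads->create(sub {
        my @done;
        while (defined(my $job = $q->dequeue)) {
            push @done, $job;   # stand-in for the real work
        }
        return @done;
    });
} 1 .. 4;

my @all = map { $_->join } @workers;
print scalar(@all), " jobs processed\n";
```

With SQS the dequeue would be an HTTP call and jobs would be deleted after completion (or reappear after a visibility timeout if a node dies), but the no-overlap guarantee is the same idea.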
It should work perfectly the first time! - toma
Re^2: Google like scalability using perl?
by dpavlin (Friar) on Oct 14, 2009 at 22:26 UTC