in reply to Speeding up data lookups

Running it under mod_perl or PPerl with no forking should speed things up a lot. Even just running it all from one script would be better than all that forking. Reading all your data into RAM can be very fast, but it limits how far you can scale as more data gets added. Putting the data into a format like a dbm file, where you can efficiently access individual records, works well for some things and doesn't require reading the whole thing into memory. A dbm lookup is also somewhat faster than a round trip to MySQL.
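
A minimal sketch of that dbm approach, using DB_File from the core distribution; the file name holdings.db and the lookup key are made up for illustration:

    use strict;
    use warnings;
    use DB_File;
    use Fcntl;   # for O_RDONLY

    # Tie a hash to an on-disk Berkeley DB file; records are fetched
    # from disk on demand, so memory use stays flat no matter how
    # large the data set grows.
    tie my %holdings, 'DB_File', 'holdings.db', O_RDONLY, 0644, $DB_HASH
        or die "Cannot open holdings.db: $!";

    # A single keyed lookup: no full-file scan, no SQL round trip.
    my $key = 'IBM';
    print "$key => $holdings{$key}\n" if exists $holdings{$key};

    untie %holdings;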

Your understanding of copy-on-write shared memory is correct, but forking is not always a win. It's most effective when the children spend a lot of their time waiting on I/O.
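
To illustrate what copy-on-write buys you: the parent builds the table once, and each forked child can read it without paying for a copy (though Perl's reference counting will gradually unshare pages as values are touched). The table contents here are dummy data:

    use strict;
    use warnings;

    # Build the lookup table once in the parent; after fork() the
    # children share these pages copy-on-write, so nothing is
    # physically duplicated until someone writes to it.
    my %table = map { $_ => $_ * 2 } 1 .. 100_000;

    for my $n (1 .. 4) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                 # child: read-only access
            print "child $n sees ", scalar(keys %table), " keys\n";
            exit 0;
        }
    }
    1 while waitpid(-1, 0) > 0;          # reap all the children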

Re^2: Speeding up data lookups
by suaveant (Parson) on Sep 19, 2005 at 19:26 UTC
    I don't think I was clear enough on that point. The idea of forking was to run maybe as many as 8 children working their way down a list of holdings files, the point being to take advantage of the system's many processors... I wasn't planning on making a thousand children or anything like that. In this case, with some of the data processing happening in Perl, this should improve things.
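
    A sketch of that kind of bounded pool, using Parallel::ForkManager from CPAN; the glob pattern and process_holdings() are placeholders for the real file list and per-file work:

        use strict;
        use warnings;
        use Parallel::ForkManager;   # CPAN module

        my @files = glob('holdings/*.dat');   # placeholder file list

        # Cap the pool at 8 children, one per CPU, rather than
        # forking per task; each child handles one file and exits.
        my $pm = Parallel::ForkManager->new(8);

        for my $file (@files) {
            $pm->start and next;      # parent: spawn and move on
            process_holdings($file);  # placeholder per-file worker
            $pm->finish;              # child exits here
        }
        $pm->wait_all_children;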

                    - Ant
                    - Some of my best work - (1 2 3)