Programs not waiting for user input are dominated by CPU or I/O. If your processing is CPU bound, then splitting the work up among the available processors can help. Similarly, if your processing is I/O bound, splitting your work over multiple I/O devices can speed things up significantly. However, if your task is I/O bound and you can't split the work over multiple I/O devices, then splitting the work among several processes can make things go even slower.
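For the CPU-bound case, here's a minimal sketch of one way to split the work across child processes with fork. The worker sub (crunch_chunk) and the chunking scheme are just stand-ins for whatever your real processing is:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical CPU-bound worker: crunch one chunk of the data.
    sub crunch_chunk {
        my ($chunk) = @_;
        my $sum = 0;
        $sum += $_ * $_ for @$chunk;
        return $sum;
    }

    my @data   = (1 .. 1_000_000);
    my $nprocs = 4;                      # assume four cores are available

    # Deal the data out into one chunk per process.
    my @chunks;
    push @{ $chunks[ $_ % $nprocs ] }, $data[$_] for 0 .. $#data;

    my @pids;
    for my $chunk (@chunks) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                 # child: do its share, then exit
            crunch_chunk($chunk);
            exit 0;
        }
        push @pids, $pid;                # parent: remember the child
    }

    # Parent waits for all the workers to finish.
    waitpid($_, 0) for @pids;

In practice you'd also want some way to get results back from the children (a pipe per child, or a module like Parallel::ForkManager that handles the bookkeeping for you).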
Keep in mind that I/O devices include the network as well as disk drives. I've had a couple jobs where file processing was limited by network bandwidth (the disk drives were in a SAN), so we overcame the bottleneck by splitting the job over several computers.
I suggest you first find out what your bottleneck is, and then think about an appropriate strategy to split the work. In the (very rare) case that the non-dominated resource is still heavily used (e.g. you're using 100% of the I/O and 90% of the CPU), then splitting the work up may not gain you much, as you'll almost immediately hit the next bottleneck. Again, this isn't a common case.
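One rough way to check which resource dominates is to compare wall-clock time against CPU time for a representative run. This is only a sketch, and do_the_work is a placeholder for your actual processing: if CPU time is close to wall time you're CPU bound, and if it's much lower you're mostly waiting on I/O.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $t0 = [gettimeofday];
    do_the_work();
    my $wall = tv_interval($t0);         # wall-clock seconds

    my ($user, $system) = times;         # CPU seconds used by this process
    my $cpu = $user + $system;

    printf "wall: %.2fs  cpu: %.2fs  (%.0f%% CPU)\n",
        $wall, $cpu, 100 * $cpu / $wall;

    sub do_the_work {
        my $x = 0;
        $x += sqrt($_) for 1 .. 2_000_000;   # placeholder workload
    }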
...roboticus
When your only tool is a hammer, all problems look like your thumb.