in reply to Revisiting array context
If you want to parallelize the addition, you might be able to exploit parallelism within a single processor (perhaps it adds four bytes at a time if you use bit vectors), or enlist more processors with threads or processes; the latter might also squeeze some extra niced CPU time out of the machine. Parallel::ForkManager, whose documentation suggests it for jobs like downloading thousands of files, could be useful here.
I suppose you would break the problem into pieces large enough to be worth the cost of starting a new process for each, then add up the partial results as the children finish, or rather just concatenate the bit vectors they return. The adding itself doesn't strike me as very time-consuming.
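A minimal sketch of that chunk-and-fork idea with Parallel::ForkManager, summing plain numbers rather than bit vectors; the chunk count and sample data are illustrative, not from the original node:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

my @data   = (1 .. 100_000);          # stand-in for the real work
my $chunks = 4;                       # pieces big enough to be worth a fork
my $size   = int(@data / $chunks) + 1;

my $total = 0;
my $pm = Parallel::ForkManager->new($chunks);

# Collect each child's partial sum as it exits.
$pm->run_on_finish(sub {
    my ($pid, $exit, $ident, $signal, $core, $data_ref) = @_;
    $total += $$data_ref if defined $data_ref;
});

for my $i (0 .. $chunks - 1) {
    $pm->start and next;              # parent keeps looping; child falls through
    my $lo = $i * $size;
    my $hi = $lo + $size - 1;
    $hi = $#data if $hi > $#data;
    my $sum = 0;
    $sum += $_ for @data[$lo .. $hi];
    $pm->finish(0, \$sum);            # ship the partial sum back to the parent
}
$pm->wait_all_children;

print "total = $total\n";
```

Passing a reference to `finish` and reading it in `run_on_finish` is how Parallel::ForkManager returns data across the fork boundary; for the bit-vector variant you would return the vector chunk instead and concatenate rather than add.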