in reply to threading a perl script

There would be no point in having many threads per page of buffered I/O fetched internally by the operating system; if anything, that would be slower. The default buffered I/O page size on Unix is 4096 bytes, but processing a 4096-byte buffer typically takes less time than reading it from disk. So threads allocated different pages from the same I/O stream would end up waiting for each other anyway (no gain there).

An exception is if an extra pipe is inserted between the file I/O and the process: the I/O system can then make its best effort to pump data down the pipe while your threads pick up 4K chunks and run with them, freeing the pipe more regularly for new data. But threads each calling seek (http://perldoc.perl.org/functions/seek.html) to a different 4096-byte boundary of a shared filehandle may not work as expected.

Forks would at least avoid the potential competition for internal I/O resources that I fear threads would encounter, so to me forks offer the more hopeful scenario. But, given the greater process overhead of forks, a trade-off multiplier (greater than 1) of how many 4K pages each fork should handle would need to be calculated. That might require experimentation.
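To make the fork idea concrete, here is a minimal sketch of what I mean: the file is split into chunks of PAGES_PER_FORK * 4096 bytes, and each child opens its own filehandle and seeks to its own chunk, so there is no seeking on a shared handle. The multiplier value and the process_chunk() routine are assumptions for illustration only; the right multiplier is exactly the thing you would tune by experiment.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX ();

my $PAGE           = 4096;   # default buffered I/O page size on Unix
my $PAGES_PER_FORK = 8;      # hypothetical trade-off multiplier (> 1)

# Placeholder for the real per-chunk work.
sub process_chunk {
    my ($offset, $buf) = @_;
    return length $buf;
}

# Fork one worker per chunk; each worker gets its own filehandle
# and seeks to its own offset, avoiding a shared-handle seek race.
sub parallel_process {
    my ($file) = @_;
    my $chunk = $PAGE * $PAGES_PER_FORK;
    my $size  = -s $file;
    defined $size or die "cannot stat $file: $!\n";

    my @pids;
    for (my $offset = 0; $offset < $size; $offset += $chunk) {
        my $pid = fork();
        die "fork failed: $!\n" unless defined $pid;
        if ($pid == 0) {     # child: read and process one chunk
            open my $fh, '<', $file or die "open $file: $!\n";
            binmode $fh;
            seek $fh, $offset, 0 or die "seek: $!\n";
            my $want = $offset + $chunk > $size ? $size - $offset : $chunk;
            read $fh, my $buf, $want;
            process_chunk($offset, $buf);
            POSIX::_exit(0); # skip parent's cleanup handlers
        }
        push @pids, $pid;    # parent: remember the child
    }
    waitpid $_, 0 for @pids;
    return scalar @pids;     # number of workers used
}

parallel_process($ARGV[0]) if @ARGV;
```

Note that fixed byte boundaries will usually split a line or record across two chunks, so real per-chunk work would need to handle the partial records at each chunk edge.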

One world, one people