in reply to Child Process as a Function

Do it the same way that you did it in Korn. "Having to run through data and process it in chunks" does not mean, nor does it imply, that processing those chunks in parallel would be beneficial, much less "faster." Just write a simple Perl script that takes two input parameters indicating which slice of the data (start and end) you want to process this time.

Run that script on one slice at a time, or use "&" in a Unix/Linux shell to launch multiple jobs *if* you can plainly see that two jobs running in parallel finish in noticeably less than twice the time of one. Since most data processing is I/O-constrained, parallelism has limited use and should not be reached for instinctively.
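
A minimal sketch of such a script, assuming line-oriented input; the script name (process_slice.pl), the file name (data.txt), and the per-record work are hypothetical placeholders:

    #!/usr/bin/perl
    # process_slice.pl -- hypothetical example; processes only the
    # records between START and END (1-based, inclusive line numbers).
    use strict;
    use warnings;

    die "usage: $0 START END\n" unless @ARGV == 2;
    my ($start, $end) = @ARGV;

    my $file = 'data.txt';    # placeholder input file
    open my $fh, '<', $file or die "cannot open $file: $!\n";

    while (my $line = <$fh>) {
        next if $. < $start;    # $. holds the current line number
        last if $. > $end;
        # ... do the real per-record work here ...
        print "worker [$start-$end]: $line";
    }
    close $fh;

To see whether parallelism actually pays off, time one slice alone, then two slices launched with "&" (the slice bounds below are made up):

    # one slice at a time:
    time perl process_slice.pl 1 500000

    # two slices in parallel; worthwhile only if this finishes in
    # noticeably less than twice the single-slice time:
    time ( perl process_slice.pl 1 500000 &
           perl process_slice.pl 500001 1000000 &
           wait )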