OK. To test whether dividing the input further would help my case, I ran the tool (to clarify: the tool is not mine, not written in Perl, compiled, and not open source) on inputs of varying sizes (1, 10, 100, 1000, and 10000 lines).
Ignoring the startup time (as I said, the tool first loads a big file into memory, but I assume we can take care of that somehow), the tool itself reports some stats on the time it spends processing the input. Here is the time it took for each file:
    1.txt:     3.06 seconds
    10.txt:    16.11 seconds
    100.txt:   54.12 seconds
    1000.txt:  7 min 14 sec (434 seconds)
    10000.txt: 69 min 44 sec (4184 seconds)
Since the processing time grows roughly linearly with input size (about 0.42 to 0.43 seconds per line for the larger inputs), splitting the file does not reduce the total work. So, I guess, splitting is only advantageous if I can process the pieces simultaneously. A very interesting suggestion, nevertheless. Thank you :)
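For completeness, here is a rough, untested sketch of how I imagine running the chunks in parallel with fork and waitpid, waiting for each child individually. ./the_tool is only a placeholder for the actual command line, and the chunk names are the test files from above:

    use strict;
    use warnings;

    # Chunk files to process; adjust to the real split output.
    my @chunks = qw(1.txt 10.txt 100.txt 1000.txt 10000.txt);

    my %chunk_for;    # pid => chunk file
    for my $chunk (@chunks) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: replace ourselves with the external tool.
            # './the_tool' stands in for the actual command.
            exec('./the_tool', $chunk) or die "exec failed: $!";
        }
        $chunk_for{$pid} = $chunk;
    }

    # Reap each child as it finishes and report its exit status.
    while (%chunk_for) {
        my $pid = waitpid(-1, 0);
        last if $pid == -1;    # no more children
        my $chunk = delete $chunk_for{$pid};
        printf "%s finished with exit status %d\n", $chunk, $? >> 8;
    }

With all children running at once, the wall-clock time should approach that of the largest chunk rather than the sum of all of them (assuming enough cores and memory for the tool's startup overhead in each child).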