Sod's Law guarantees that it will always be the first, middle and last chunks of the files that take the longest, so you'll still end up with 13 CPUs standing idle while those 3 run on for hours trying to catch up.
As far as I could see, the speed of the chunks is completely random. Sometimes only one part (say, part 3 of 16) keeps running while the remaining 15 CPUs sit idle. The reason I don't want to split the files further is that I already have enough files to deal with, and I would rather avoid the confusion. Also, at startup each process loads a huge file (about 10 GB) into memory, which takes considerable time in itself, and I would like to minimize that overhead as well. So I am not sure I can follow this path.
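For what it's worth, here is a rough sketch of one way the two concerns can be reconciled (load_index, process_chunk, reference.dat and chunks/part_* are made-up placeholders, not anything from the actual setup): load the big file once in the parent, split the input into many more chunks than CPUs, and let Parallel::ForkManager run a fixed pool of workers over them. On Linux the forked children share the parent's memory pages copy-on-write, so the ~10 GB load happens only once, and a slow chunk no longer leaves the other CPUs idle, because each freed slot immediately picks up the next remaining chunk.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Parallel::ForkManager;

    # Hypothetical helpers, for illustration only.
    sub load_index {                 # load the big (~10 GB) file once, in the parent
        my ($path) = @_;
        # ... slurp/parse $path into a data structure ...
        return {};                   # placeholder
    }

    sub process_chunk {              # do the real work on one chunk of input
        my ($index, $chunk_file) = @_;
        # ... use the read-only $index to process $chunk_file ...
    }

    my $big_index = load_index('reference.dat');   # loaded exactly once

    my @chunks = glob('chunks/part_*');            # many small chunks, more than CPUs
    my $pm     = Parallel::ForkManager->new(16);   # at most 16 workers at a time

    for my $chunk (@chunks) {
        $pm->start and next;         # parent: queue the next chunk
        # child: inherits $big_index copy-on-write, so no reload per chunk
        process_chunk($big_index, $chunk);
        $pm->finish;                 # child exits; its slot is reused for the next chunk
    }
    $pm->wait_all_children;          # parent blocks until every worker is done

This is only a sketch under those assumptions; if the workers write to the shared structure, or the platform doesn't fork copy-on-write, the memory saving goes away and a different sharing mechanism would be needed.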
In reply to Re^2: Wait for individual sub processes by crackerjack.tej
in thread Wait for individual sub processes [SOLVED] by crackerjack.tej