in reply to capture stdout and stderr from external command
I would approach this problem by spawning n (say, 60) workers, each of which withdraws an $args string from a serialized queue, executes that command, and waits for it to complete. (Remember that you will therefore have up to n * 2 processes running at the same time: the workers plus the commands they have launched. But you will never have more than that, no matter how large the work-queue may be.)
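A minimal sketch of that arrangement, using the core threads and Thread::Queue modules (this assumes a threads-enabled perl; the echo commands are stand-ins for your real command strings):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    # Enqueue the entire work list up front; each item is one command string.
    my $queue = Thread::Queue->new(map { "echo job $_" } 1 .. 20);

    my $n_workers = 4;    # say, 60 for the real workload

    sub worker {
        # dequeue_nb() returns undef once the queue is empty; since nothing
        # is enqueued after startup, undef means this worker is finished.
        while (defined(my $cmd = $queue->dequeue_nb())) {
            # 2>&1 folds stderr into stdout, and qx{} blocks until the
            # command completes, so each worker runs one command at a time.
            my $output = qx{$cmd 2>&1};
            print "[$cmd] captured: $output";
        }
    }

    my @workers = map { threads->create(\&worker) } 1 .. $n_workers;
    $_->join() for @workers;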
After a worker has launched a command and the command has completed, the worker is also responsible for handling its output. Perhaps this involves a mutual-exclusion object, which the worker must lock before it may transfer the output to the permanent store, and release when it has finished doing so.
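One way to realize that mutual exclusion in Perl is a shared variable used with lock(); in this sketch the "permanent store" is assumed to be a single append-mode log file (results.log is a made-up name):

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    # A shared variable whose only job is to act as the mutex
    # guarding the permanent store.
    my $store_lock :shared = 0;

    sub save_output {
        my ($cmd, $output) = @_;

        lock($store_lock);    # blocks until no other worker holds the lock

        # Only one worker at a time reaches this point, so concurrent
        # appends cannot interleave.
        open my $fh, '>>', 'results.log' or die "results.log: $!";
        print {$fh} "=== $cmd ===\n$output";
        close $fh;
    }                         # the lock is released when save_output returns

    save_output('echo demo', "demo output\n");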
The workers, once launched, will survive until the queue has become exhausted; then they will all politely put away their toys, bid the world adieu, and die. (The main thread, after having launched the workers, really has nothing at all to do except wait for all of its children to die, which means that the job is done.)
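The main thread's side of the bargain is therefore only a few lines. A sketch, with a stub worker standing in for the real one, to show the launch-then-wait shape:

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    my $queue = Thread::Queue->new('echo one', 'echo two', 'echo three');

    # Each worker drains the queue and then returns; that is, the thread dies.
    sub worker {
        while (defined(my $cmd = $queue->dequeue_nb())) {
            my $output = qx{$cmd 2>&1};
        }
    }

    threads->create(\&worker) for 1 .. 3;

    # Nothing left for the main thread but to wait for its children; when
    # the last join() returns, the queue is exhausted and the job is done.
    $_->join() for threads->list();
    print "All workers have exited.\n";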
The number of workers (60) is a tunable parameter that is entirely independent of the size of the queue (5,000), so it can be freely adjusted. If the system begins to “thrash,” simply ease off on the throttle next time. With a little bit of experimentation you will quickly find the “sweet spot” for your particular system.
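To make that experimentation painless, the worker count can be taken from the command line instead of being hard-coded; for example:

    use strict;
    use warnings;

    # Take the worker count from the command line, defaulting to 60.
    my $n_workers = shift(@ARGV) // 60;
    print "Using $n_workers workers\n";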