in reply to parallelising processes

/me nods...

This sounds like just the ticket for one of the fork-managers now being discussed.   A “pool” of, say, 8 worker threads would be set up, and each one of them would do the same thing (a sketch in code follows this list):

  1. Retrieve the next unit of work from a queue (which, of course, can be arbitrarily large).
  2. Process the unit of work and place the completion notification on another queue (or otherwise let the world know that this unit-of-work is done).
  3. Rinse and repeat.   (Eventually, the work peters out and all of the threads go dormant ... or, if you prefer, they graciously depart from the land of the living.)
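
Purely for illustration, here is a minimal sketch of that loop in Python, using the standard `threading` and `queue` modules.   The `process()` routine and the hundred dummy work-units are hypothetical stand-ins for whatever your real job actually is:

```python
import queue
import threading

NUM_WORKERS = 8           # adjustable; see the note on tuning below

work_q = queue.Queue()    # the (arbitrarily large) queue of pending units of work
done_q = queue.Queue()    # completion notifications land here

def process(unit):
    # Hypothetical stand-in for whatever each unit of work actually involves.
    return f"finished {unit}"

def worker():
    while True:
        unit = work_q.get()
        if unit is None:              # sentinel: the work has petered out
            break                     # this thread graciously departs
        done_q.put(process(unit))     # let the world know this unit is done

# Launch the pool.
threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

# Feed it some work, then one sentinel per worker so everyone goes home.
for unit in range(100):
    work_q.put(unit)
for _ in range(NUM_WORKERS):
    work_q.put(None)

for t in threads:
    t.join()

while not done_q.empty():
    print(done_q.get())
```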

The actual number of threads to be launched would, of course, be an adjustable parameter.   If you know that you have 8 processors or cores that probably don’t have anything better to do with their time, “8” would be a good starting point.   You could then do some careful experimenting and measuring to see what the “sweet spot” for your particular setup turns out to be.
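
If you want a concrete default to start measuring from, the machine itself can suggest one (again a Python sketch; the fallback of 8 is arbitrary):

```python
import os

# One worker per available core is a reasonable starting point; measure from there.
NUM_WORKERS = os.cpu_count() or 8
```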

This is such a common requirement that you don’t need to plan on “writing” anything ... you will simply “choose one” of the pools that already exist.
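
(Purely as an illustration of “choosing one,” here is roughly what it looks like with Python’s standard-library `ThreadPoolExecutor`; the `crunch()` routine is a hypothetical stand-in for the real unit of work.)

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def crunch(unit):
    # Hypothetical unit-of-work handler.
    return unit * unit

with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(crunch, unit) for unit in range(100)]
    for fut in as_completed(futures):
        print(fut.result())
```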