After re-reading your post several times, it appears your problem is the expanding size of the '%dataset' hash. One solution is to save the assembled '%dataset' hash to disk and then clear the hash before forking the children.
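For example, a minimal sketch of the spool-to-disk step might look like the following. (The work-file name, the length-prefixed record layout, and the sample contents of %dataset are my own assumptions for illustration, not anything from your code.)

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical %dataset contents, standing in for the real data.
    my %dataset  = ( job1 => "data for job 1", job2 => "data for job 2" );
    my $workfile = 'dataset.work';      # assumed work-file name

    open my $out, '>', $workfile or die "open $workfile: $!";
    binmode $out;
    for my $key ( sort keys %dataset ) {
        my $unit = "$key\t$dataset{$key}";               # one unit of work
        print {$out} pack( 'N', length $unit ), $unit;   # size first, then data
    }
    close $out or die "close $workfile: $!";

    %dataset = ();    # free the memory before forking the children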
As each child asks for more work, the 'boss' reads the next unit of work from disk and passes it to the child for processing. You could do this with a database, but that just adds overhead you don't need. (Note: I'm assuming the hash, now a file, is deleted after the boss completes the run.) Also, since you don't say anything about the contents of the data, you may want to store the size of each unit of work on disk ahead of the unit itself and read just that amount; Perl does that bookkeeping for you in a hash, but on disk you have to do it yourself.
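Here is a matching sketch of the boss side, reading one length-prefixed unit per request. It assumes the same file name and record format as the writer sketch above; in the real program the print would be replaced by whatever pipe or socket you use to hand work to the asking child.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $workfile = 'dataset.work';
    open my $in, '<', $workfile or die "open $workfile: $!";
    binmode $in;

    # Return the next unit of work, or nothing when the file is exhausted.
    sub next_unit {
        my $len_buf;
        read( $in, $len_buf, 4 ) == 4 or return;            # end of file
        my $len = unpack 'N', $len_buf;
        read( $in, my $unit, $len ) == $len or die "short read on $workfile";
        return $unit;
    }

    while ( defined( my $unit = next_unit() ) ) {
        my ( $key, $data ) = split /\t/, $unit, 2;
        print "dispatch $key => $data\n";   # hand this to the requesting child
    }
    close $in;

    unlink $workfile;    # delete the work file after the run completes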
This solution is very simple and would be easy to test/verify. Hope this helps!
Regards...Ed
"Well done is better than well said." - Benjamin Franklin