The reason I'm doing that is because I've read, in the node Using fork with DBI to create simultaneous db connections, that DBI hates forking and you must use a new DBI connection for each child. Otherwise you'll get DBI stepping on its own feet and causing unpredictable results.
My initial thought when creating the script was to run a child for each system; each system takes approximately 45 seconds to 1 minute 30 seconds to finish.
for ($i = 0; $i < $numchilds; $i++) { fork; connect; while (get_task) { run_task }; disconnect; exit; }
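Fleshed out, that pseudocode looks roughly like the sketch below. It's a minimal example, assuming a hypothetical MySQL DSN and treating get_task/run_task as placeholder helpers from the pseudocode above; the important part is that each child opens its own DBI handle after the fork and never touches the parent's.

    use strict;
    use warnings;
    use DBI;
    use Parallel::ForkManager;

    my $numchilds = 5;
    my $pm = Parallel::ForkManager->new($numchilds);

    for my $i (0 .. $numchilds - 1) {
        $pm->start and next;    # parent keeps looping; child falls through

        # Each child opens its own connection; reusing the parent's
        # handle across a fork is what causes the unpredictable results.
        my $dbh = DBI->connect('dbi:mysql:stats', 'user', 'pass',
                               { RaiseError => 1, AutoCommit => 1 });

        while (my $task = get_task($dbh)) {    # hypothetical helper
            run_task($dbh, $task);             # hypothetical helper
        }

        $dbh->disconnect;
        $pm->finish;            # child exits here
    }
    $pm->wait_all_children;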
You're suggesting taking the pool of tasks, splitting it into 5 arrays, and then running them?
My task is updating stats for 100+ servers in a database. I'll have to split the array of hosts into 5 arrays (as evenly as possible), one for each fork.
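If it helps, here's one way to do that split: a round-robin pass over the host list, which keeps the buckets within one element of each other. The host names here are placeholders.

    # Round-robin @hosts into $numchilds buckets, one bucket per fork.
    my @hosts     = map { sprintf 'host%03d', $_ } 1 .. 100;  # placeholder names
    my $numchilds = 5;

    my @buckets;
    for my $i (0 .. $#hosts) {
        push @{ $buckets[ $i % $numchilds ] }, $hosts[$i];
    }

    # Child $n then works through @{ $buckets[$n] }.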
That's a possibility...
Thanks, I'll try that.
-- philip
We put the 'K' in kwality!