The problem I'm running into is that I read the database into a hash at the start and dump the hash back to the database at the end of the script. Each forked child gets its own copy of the hash, so the children's updates never reach the parent, and what gets written back at the end is the hash exactly as it was at load time. Here is some dumbed-down code to illustrate:
#!/usr/bin/perl -w
use strict;
use Data::Dumper;
use Parallel::ForkManager;

my $pm = new Parallel::ForkManager(10);
my $update_flag = 0;        # must be declared before the callback closes over it
my %hash;                   # filled in by &load_database
my %sess_hist = ();

$pm->run_on_finish(
    sub {
        my (undef, $exit_code, $ident) = @_;
        $update_flag = 1 if $exit_code;
    }
);

&load_database;

for my $session (keys %{$hash{Session}}) {
    my $pid = $pm->start($session) and next;    # parent gets the pid and moves on

    if (&check_overdue($session)) {
        &run_session($session);                 # &run_session updates %sess_hist with new stats
    }
    else {
        exit(0);                                # nothing to do; child exits with status 0
    }

    $pm->finish($session);                      # exit code gets passed to the run_on_finish callback
}

$pm->wait_all_children;

&save_database if $update_flag;
I'd be more than happy to ditch Data::Dumper in favor of a simple database, but the ones I've played with (NDBM_File, SDBM_File, etc.) all seem to do a final untie() at the end to re-write the database, putting me back in the same boat.
What's a good way out of this with the least amount of module installation?
Thanks.
Update:
Wow, that was really a "Bread good, fire bad" moment. Yes, my "database" is nothing more than a Data::Dumper printout. The easy solution occurred to me on the drive home: have each child process build a new hash of just the things it changed, then re-read the original hash from the file, merge the changes in, and re-save.
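Something like this is what I have in mind for the merge step (untested sketch only; the save_changes name, the $db_file and %$changes arguments, and the flock() are just illustrative, and it assumes the "database" is a Data::Dumper printout that assigns a hashref to $VAR1 with a top-level Session key, like the sample above):

use strict;
use warnings;
use Data::Dumper;
use Fcntl qw(:flock);

# Called by a child just before it exits. $changes is a hashref holding only
# the sessions this particular child touched.
sub save_changes {
    my ($db_file, $changes) = @_;

    open my $fh, '+<', $db_file or die "Can't open $db_file: $!";
    flock($fh, LOCK_EX) or die "Can't lock $db_file: $!";   # keep two children from clobbering each other

    # Re-read the current on-disk hash (a Data::Dumper printout assigning to $VAR1).
    local $/;                       # slurp mode
    my $code = <$fh>;
    my $VAR1;
    eval $code;
    die "Can't eval $db_file: $@" if $@;
    my %db = %$VAR1;

    # Merge in just the keys this child changed.
    $db{Session}{$_} = $changes->{$_} for keys %$changes;

    # Overwrite the old contents; the lock is released on close.
    seek($fh, 0, 0)  or die "Can't seek $db_file: $!";
    truncate($fh, 0) or die "Can't truncate $db_file: $!";
    print {$fh} Dumper(\%db);
    close $fh        or die "Can't close $db_file: $!";
}

Each child would call save_changes() with its own little hash of updates right before $pm->finish(), and the single &save_database at the end of the parent goes away.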
Next time: more caffeine, less knee-jerk question posting.
In reply to Forking and database updates by delirium