delirium has asked for the wisdom of the Perl Monks concerning the following question:
The problem I'm running into is that I read the database into a hash at startup and dump the hash back to the database at the end of the script. Each forked child gets its own copy of that hash, so the children's updates never reach the parent, and I end up re-writing the hash exactly as it was at load time. Here is some dumbed-down code to illustrate:
#!/usr/bin/perl -w
use strict;
use Data::Dumper;
use Parallel::ForkManager;

my %hash;            # database image, filled by &load_database
my %sess_hist = ();
my $update_flag = 0;

my $pm = new Parallel::ForkManager(10);
$pm->run_on_finish( sub {
    my (undef, $exit_code, $ident) = @_;
    $update_flag = 1 if $exit_code;
} );

&load_database;
for my $session (keys %{$hash{Session}}) {
    my $pid = $pm->start($session) and next;
    if (&check_overdue($session)) {
        &run_session($session);  # &run_session updates %sess_hist with new stats
    }
    else {
        exit(0);
    }
    $pm->finish(1);  # nonzero exit code tells the parent something changed
}
$pm->wait_all_children;
&save_database if $update_flag;
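For what it's worth, recent versions of Parallel::ForkManager (0.7.6 and later, if I recall correctly) can ship a data structure from each child back to the parent: the child passes a reference as the second argument to finish(), and it shows up as the sixth argument of the run_on_finish callback. A minimal sketch (the session names and the "stats" strings are just placeholders):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

my %merged;
my $pm = Parallel::ForkManager->new(4);

# The child's second argument to finish() is serialized and handed to
# the parent here as the sixth callback argument.
$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident, $signal, $core, $data) = @_;
    %merged = (%merged, %$data) if ref $data eq 'HASH';
});

for my $session (qw(alpha beta)) {
    $pm->start($session) and next;                    # parent loops; child continues
    my %changes = ($session => "stats for $session"); # stand-in for real work
    $pm->finish(0, \%changes);                        # child exits, returning %changes
}
$pm->wait_all_children;

print "$_ => $merged{$_}\n" for sort keys %merged;
```

That keeps all the hash-merging in the parent, with no shared file to re-read.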
I'd be more than happy to ditch Data::Dumper in favor of a simple database, but the ones I've played with (NDBM_File, SDBM_File, etc.) all seem to do a final untie() at the end to re-write the database, putting me back in the same boat.
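In my experience a tied SDBM hash actually writes each store to its .pag/.dir files as it happens, rather than waiting for the final untie() — a quick check (the temp directory is just for the demo):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);

# Tie, store one key, and untie.
tie my %db, 'SDBM_File', "$dir/sess", O_RDWR | O_CREAT, 0666
    or die "tie failed: $!";
$db{session1} = 'last run: 2003-11-11';
untie %db;

# Re-tie read-only to show the value survived on disk.
tie my %db2, 'SDBM_File', "$dir/sess", O_RDONLY, 0666
    or die "re-tie failed: $!";
print "$db2{session1}\n";
untie %db2;
```

The catch is that each forked child inherits its own tie handle, so several children writing the same SDBM file concurrently would still need external flock-style locking to be safe.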
What's a good way out of this with the least amount of module installation?
Thanks.
Update:
Wow, that was really a "Bread good, fire bad" moment. Yes, my "database" is nothing more than a Data::Dumper printout. The easy solution occurred to me on the drive home: each child process builds a new hash of just the things it changed, then re-reads the original hash from file, merges in the changes, and re-saves.
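The read/merge/re-save cycle might look something like this (save_db, load_db, and merge_changes are hypothetical helper names, and the session keys are placeholders):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use File::Temp qw(tempfile);

my ($fh, $file) = tempfile();

# Write the hash out as a Data::Dumper image.
sub save_db {
    my ($path, $href) = @_;
    open my $out, '>', $path or die "open $path: $!";
    local $Data::Dumper::Purity = 1;
    print $out Data::Dumper->Dump([$href], ['db']);
    close $out;
}

# Slurp the image and eval it back into a hashref.
sub load_db {
    my ($path) = @_;
    open my $in, '<', $path or die "open $path: $!";
    my $code = do { local $/; <$in> };
    close $in;
    my $db;
    eval "$code; 1" or die "bad database image: $@";
    return $db;
}

# Overlay a child's delta hash; the changes win on collision.
sub merge_changes {
    my ($db, $changes) = @_;
    @{$db}{ keys %$changes } = values %$changes;
    return $db;
}

# Simulate: parent saved the hash, a child computed %changes, and we
# re-read / merge / re-save instead of dumping the stale parent copy.
save_db($file, { session1 => 'old', session2 => 'old' });
my %changes = (session2 => 'updated');
save_db($file, merge_changes(load_db($file), \%changes));

my $final = load_db($file);
print "$_ => $final->{$_}\n" for sort keys %$final;
```

If two children can finish at the same moment, the re-read/merge/re-save step would still want a flock() around it so they don't clobber each other.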
Next time: more caffeine, less knee-jerk question posting.
Replies are listed 'Best First'.

Re: Forking and database updates
by mpeppler (Vicar) on Nov 11, 2003 at 22:12 UTC

Re: Forking and database updates
by Roger (Parson) on Nov 11, 2003 at 22:56 UTC

Re: Forking and database updates
by perrin (Chancellor) on Nov 11, 2003 at 22:07 UTC

Re: Forking and database updates
by blue_cowdawg (Monsignor) on Nov 11, 2003 at 21:42 UTC