delirium has asked for the wisdom of the Perl Monks concerning the following question:
I wanted to avoid any possible race conditions when updating the file. The file's contents may shrink, so I didn't want to open the file with `+<` and rewrite it in place. I settled on creating `.l` files, establishing locks on those, and doing reads and updates on the history file only after establishing a lock on the corresponding `.l` file.
My question is: Am I overlooking something dangerous? Access to the history file is restricted, so I'm not too concerned about using do(file) (although I'll be happy when Storable is installed or Perl is upgraded to 5.8.x). I don't see any race conditions in the code, but I'm still worried that I'm missing something that will only show up when more sessions run simultaneously.
Here is a trimmed down code snippet:
```perl
#!/usr/bin/perl -w
use strict;
use Data::Dumper;
use Parallel::ForkManager;

my %sess_hist = ();              # Session history hash
my %hash      = ();              # Session flow information
my $session;                     # ID of current session
my $hist_file = '~/hist.dat';    # Session history file
my $pm = Parallel::ForkManager->new(5);  # (initialization trimmed in the original snippet)

## Check for sessions that are due/overdue and run them
for (keys %{$hash{Session}}) {
    $session = $_;
    my $pid = $pm->start($session) and next;   # parent skips to next session
    if (&check_overdue) {
        if (open SESSLOCK, '>', "~/$session.l") {
            my $stime = time;
            flock SESSLOCK, 2;
            &parse_hist;
            if ($stime > $sess_hist{$session}{last}) {
                &run_session;
            }
            else {
                logit "Aborting to avoid simultaneous sessions";
            }
            close SESSLOCK;
            unlink "~/$session.l";
        }
        else {
            logit "Can't open session lock file, aborting";
        }
    }
    $pm->finish($session);   # child exits here
}
$pm->wait_all_children;
unlink "$hist_file.l";

sub parse_hist {
    # Opens hist file and pulls info into %sess_hist
    if (-s $hist_file) {                    # If the history file has data,
        local $/ = undef;                   # set slurp mode,
        open HFL, '>', "$hist_file.l";      # open the History File Lock,
        flock HFL, 2;                       # establish an exclusive lock,
        %sess_hist = %{do($hist_file)};     # and load hist file as %sess_hist
        close HFL;
    }
}

sub merge_hist_changes {
    # Create copy of current session data
    my %temp_hash = %{$sess_hist{$session}};

    # Reload hist file after establishing lock on .l file
    open HFL, '>', "$hist_file.l";
    flock HFL, 2;
    local $/ = undef;
    %sess_hist = %{do($hist_file)} if -s $hist_file;

    # Change reference to point to copy made earlier
    $sess_hist{$session} = \%temp_hash;

    # Dump it
    local $Data::Dumper::Indent = 1;
    open HF, '>', $hist_file;
    print HF Dumper \%sess_hist;
    close HF;
    close HFL;   # Release lock and let next child update $hist_file
}
```
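For reference, the sidecar-lockfile pattern the snippet relies on can be sketched on its own. This is a minimal, hedged illustration, not the poster's actual code: it uses the symbolic `LOCK_EX` constant from `Fcntl` instead of the literal `2`, opens the lock file in append mode so nothing is truncated, and stores a made-up tab-separated history record (the file names `hist.dat` / `hist.dat.l` and the `last_run` key are illustrative assumptions).

```perl
#!/usr/bin/perl
# Sketch of the sidecar-lockfile pattern: lock a separate ".l" file,
# then read-modify-write the real data file inside the critical section.
use strict;
use warnings;
use Fcntl qw(:flock);   # provides LOCK_EX, LOCK_SH, LOCK_UN

my $data_file = 'hist.dat';        # illustrative data file
my $lock_file = "$data_file.l";    # sidecar lock file

# Open (or create) the lock file without touching the data file,
# then block until we hold an exclusive lock on it.
open my $lock_fh, '>>', $lock_file or die "Can't open $lock_file: $!";
flock $lock_fh, LOCK_EX            or die "Can't lock $lock_file: $!";

# --- critical section: no other cooperating process is in here ---
my %hist;
if (-s $data_file) {
    open my $in, '<', $data_file or die "Can't read $data_file: $!";
    while (<$in>) {
        chomp;
        my ($key, $val) = split /\t/;
        $hist{$key} = $val;
    }
    close $in;
}
$hist{last_run} = time;   # hypothetical field for the example

# Rewriting (and so shrinking) the file is safe while the lock is held.
open my $out, '>', $data_file or die "Can't write $data_file: $!";
print $out "$_\t$hist{$_}\n" for sort keys %hist;
close $out;
# --- end critical section ---

close $lock_fh;   # closing the handle releases the flock
```

Because the lock lives on a file that is never rewritten, there is no window where a reader can see a truncated data file, which is the race the `+<`/truncate approach would have opened.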
Thanks for any input.
Re: Flocking .l file instead of file to be updated
by duff (Parson) on Dec 02, 2003 at 15:30 UTC
Re: Flocking .l file instead of file to be updated
by Anonymous Monk on Dec 02, 2003 at 17:11 UTC
Re: Flocking .l file instead of file to be updated
by Anonymous Monk on Dec 02, 2003 at 17:23 UTC