I have an existing system where a main program executes a number of instances of a Perl program and then suspends itself with a kill -STOP $$. The submitted programs each write their pid to a common file and remove it again when they finish. The very last instance to finish deletes the common file and issues a kill -CONT for the main program.
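Roughly, the shape of it is this (a cut-down sketch, not the real code; the real children are separate Perl programs and the pid-file handling is left out here, with "$i == 3" standing in for the last-one-out check):

#!/usr/bin/perl
use strict;
use warnings;

my $main_pid = $$;

for my $i (1 .. 3) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: do its work, then (if it is the last one) wake the parent.
        sleep 1 + $i;
        kill 'CONT', $main_pid if $i == 3;   # stand-in for "last one out"
        exit 0;
    }
}

# Main program suspends itself until a child sends SIGCONT.
kill 'STOP', $$;
print "Resumed: all children are done\n";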
The problem is that even though all the submitted programs finish, there are still pids left in the common file. Hence the last instance doesn't detect that it is the last one and never issues the kill -CONT.
I can demonstrate that the code doesn't work by cutting it down to its essentials, but I am not clear why it fails. Multiple instances could open the common file at the same time, but there is a flock which should serialize the writes to the file.
Can this happen: instances 1 and 2 both open the file and read the pid list. Instance 1 flocks first and rewrites the file with its own pid removed. When instance 1 finishes, instance 2 flocks the file. However, the pid list that instance 2 read still contains instance 1's pid, so instance 2 rewrites the file with instance 1's pid in it, putting it back just after it was removed.
Is this plausible? A code snippet below.
Thanks for any help you can give me.
open(PID, "> $CONF{upd_lockfile}");   # Print all PIDs except this one.
flock(PID, 2);
seek(PID, 0, 2);
for my $a (@pids) {
    print(PID "$a\n") if ($a != $$);
}
flock(PID, 8);
close(PID);
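For what it is worth, the rewrite I am considering reads and rewrites the pid list under one exclusive lock, roughly like this (a sketch only: it assumes %CONF as above, that the child was given the main program's pid in $main_pid, and it uses the Fcntl constants rather than the magic numbers 2 and 8):

use Fcntl qw(:flock);

# Lock first, then read, filter, and rewrite -- all under the same lock.
open(my $pidfh, '+<', $CONF{upd_lockfile}) or die "open: $!";
flock($pidfh, LOCK_EX)                     or die "flock: $!";

chomp(my @pids = <$pidfh>);                        # the pid list as it is *now*
my @others = grep { /^\d+$/ && $_ != $$ } @pids;   # drop this instance's pid

seek($pidfh, 0, 0);                    # rewind ...
truncate($pidfh, 0);                   # ... empty the file ...
print {$pidfh} "$_\n" for @others;     # ... and write the remaining pids back
close($pidfh);                         # closing flushes and releases the lock

unless (@others) {
    unlink $CONF{upd_lockfile};        # last instance deletes the common file
    kill 'CONT', $main_pid;            # ... and wakes the main program
}

Does locking before the read, and holding the lock until the rewrite is on disk, close the window I described above?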