http://qs1969.pair.com?node_id=977369

temporal has asked for the wisdom of the Perl Monks concerning the following question:

I have a script that gets called by a daemon. It executes a task that sometimes takes longer than the time between the daemon's scheduled runs. The task is resource-intensive enough that I don't want multiple instances of this script running at once.

To keep this from happening, I create a lockfile. If this lockfile is present when the script is run, it exits immediately. The file is then deleted at the end of the script.

This setup ran smoothly until recently, when the system this script runs on experienced an unscheduled restart. Unfortunately, it happened in the middle of executing the aforementioned script, after a lockfile had been created. The script was forced to stop and the lockfile remained. Curses!

So obviously, when the system came back online and the daemon restarted, the script never ran again because the stale lockfile was still in place.

What I'd really like to achieve would be a bulletproof lockfile that can withstand this kind of situation. I remembered that File::Temp has a nice UNLINK property that forces the file to be deleted when the process exits. I'm not sure if this happens when the process is killed, however. But that's the sort of functionality that I'm looking for. Also, I don't think you can name the temp files that this module creates.

I looked around on CPAN and found several lockfile modules. It looks like their solution is to add a timeout to the lockfile. That doesn't really work for me, since this process's runtime can vary quite a bit.

Another solution is to write the PID into the lockfile and have the script check whether that PID still exists, to verify that the lockfile isn't stale. I don't know enough about PID assignment to know how reliable this would be. I was a little surprised that none of the lockfile modules had this check built in.

I could also write a signal handler that deletes my lockfile when a kill signal is received, since IIRC Perl does not run END blocks when it dies from an uncaught signal.
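
Something like this minimal sketch is what I have in mind (the path is made up, and of course a handler can't catch SIGKILL, let alone a power loss, which is my actual problem):

    my $lockfile = '/tmp/mytask.lock';    # hypothetical path
    for my $sig (qw(INT TERM HUP)) {
        $SIG{$sig} = sub { unlink $lockfile; exit 1 };    # clean up, then exit
    }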

Long question short - is there a module or a better way of creating reliable lockfiles?

Replies are listed 'Best First'.
Re: reliable lockfiles? (lock)
by tye (Sage) on Jun 20, 2012 at 16:05 UTC

    Create the lock file if it doesn't exist. Then lock it, exclusively (such as via flock). No matter how your process dies, the lock will be released when the process is no longer running. Trying to lock the lock file will fail only if another process is already running. You don't even have to worry about a PID getting re-used. Don't have code that deletes the lock file. For monitoring convenience, after locking the file succeeds, write your PID to the file.

    This also has the major advantage of preventing a classic race condition where you end up with two instances running.
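
    A minimal sketch of this approach (the lockfile path is an illustrative assumption; pick one your process can write to):

        use strict;
        use warnings;
        use Fcntl qw(:flock);
        use IO::Handle;

        my $lockfile = '/var/lock/mytask.lock';    # hypothetical path

        # Create the file if needed; never delete it.
        open my $fh, '>>', $lockfile or die "Cannot open $lockfile: $!";

        # Take an exclusive, non-blocking lock. If another instance
        # already holds it, bail out immediately instead of waiting.
        flock( $fh, LOCK_EX | LOCK_NB ) or exit 0;

        # For monitoring convenience, record our PID in the file.
        $fh->autoflush(1);
        truncate $fh, 0;
        seek $fh, 0, 0;
        print {$fh} "$$\n";

        # ... long-running work here ...

        # No unlink and no explicit unlock: the OS releases the lock
        # when this process exits, no matter how it exits.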

    - tye        

Re: reliable lockfiles?
by atcroft (Abbot) on Jun 20, 2012 at 15:47 UTC

    Regarding the part of your comment about PIDs, I believe you can use kill 0, $pid to determine whether a signal could be sent to that PID. If so, then verify that the PID belongs to a process executing the same script/program (by parsing the output of a 'ps' command, or by using a module such as Proc::ProcessTable).

    # Untested. $script_name is assumed to hold the name of the
    # script to look for.
    use Proc::ProcessTable;

    my $script_name = 'my_script.pl';    # assumed; substitute your own
    my $tobj        = Proc::ProcessTable->new;
    my $proctable   = $tobj->table();
    my $pid         = 0;

    for (@$proctable) {
        if ( $_->cmndline =~ m/$script_name/ ) {
            $pid = $_->pid;
            last;
        }
    }

    # Guard against $pid == 0: "kill 0, 0" would signal the current
    # process group and falsely report "running".
    print $script_name,
        ( $pid && kill( 0, $pid ) ? q{ appears} : q{ does not appear} ),
        q{ to be running}, qq{\n};

    Hope that helps.

Re: reliable lockfiles?
by moritz (Cardinal) on Jun 20, 2012 at 16:07 UTC
    What I'd really like to achieve would be a bulletproof lockfile that can withstand this kind of situation. I remembered that File::Temp has a nice UNLINK property that forces the file to be deleted when the process exits. I'm not sure if this happens when the process is killed, however. But that's the sort of functionality that I'm looking for.

    There are two approaches to deleting temp files. The first is an END block that deletes the file (but that doesn't help in the case of a power outage), and the second is to call unlink while the file is still open. On UNIX systems that hides the file from all processes, but it remains on disc until no more file handles to it are open. Of course that's no good for locking either.
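
    A minimal sketch of the unlink-while-open idiom (UNIX-specific; the path is made up):

        open my $fh, '+>', '/tmp/scratch.dat' or die "Cannot open: $!";
        unlink '/tmp/scratch.dat';     # name is gone, but $fh still works
        print {$fh} "scratch data\n";  # storage is reclaimed once $fh closes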

    Another approach to lock files is to put them in a location that the operating system clears out at boot time (/var/lock/ on Debian systems), or to put them on a RAM disc in the first place, whose contents automatically disappear at power down/reboot.

    I realize that all of that isn't exactly what you're looking for, but maybe it still gives you some ideas.

Re: reliable lockfiles?
by temporal (Pilgrim) on Jun 20, 2012 at 16:31 UTC

    Thanks for the great replies!

    I think I'll use flock as tye suggested. Seems to do exactly what I'm looking for. I like the idea of using a directory that gets cleared out on boot, as moritz suggested, as well. That was my initial thought behind using File::Temp: to get access to the system-specified temp directory.

    Strange things are afoot at the Circle-K.

Re: reliable lockfiles?
by tweetiepooh (Hermit) on Jun 20, 2012 at 15:04 UTC
    If you are going to check the process tree, why not simply check whether multiple copies of the process are running and die if so? If the lock file is used elsewhere, then, provided this is the only copy running, drop and recreate the lock file.
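
    A rough sketch of that idea, assuming Proc::ProcessTable and a made-up script name:

        use strict;
        use warnings;
        use Proc::ProcessTable;

        # Count processes whose command line mentions the script.
        my $count = grep { $_->cmndline =~ /my_script\.pl/ }    # hypothetical name
                    @{ Proc::ProcessTable->new->table };

        # This process matches too, so more than one means another copy.
        exit 0 if $count > 1;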