in reply to [Solved]Easiest way to protect process from duplication.

For this sort of problem my preferred solution is simply to take an exclusive non-blocking lock on a file with flock; the operating system then does the checking for me. For nice error messages I usually let the process that takes the lock write its pid into the file (or into another file if I need it to work on Windows), so that a program that fails to take the lock can read the pid from the lockfile and mention it in a warning message.
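
A minimal sketch of that approach (the lockfile path here is made up; use whatever suits your application):

    use strict;
    use warnings;
    use Fcntl qw(:flock O_RDWR O_CREAT);
    use IO::Handle;

    my $lockfile = '/tmp/myapp.lock';   # hypothetical path

    # Create the file if needed (O_CREAT, no O_EXCL) and never delete it.
    sysopen my $fh, $lockfile, O_RDWR | O_CREAT
        or die "Can't open $lockfile: $!";

    if (flock $fh, LOCK_EX | LOCK_NB) {
        # We hold the lock: record our pid for friendlier diagnostics.
        truncate $fh, 0 or die "Can't truncate $lockfile: $!";
        print {$fh} "$$\n";
        $fh->flush;
        # ... do the real work; the lock is released when the process exits ...
    }
    else {
        # Someone else holds the lock: read their pid and complain.
        chomp(my $pid = <$fh> // '');
        die "Already running" . ($pid ne '' ? " as pid $pid" : '') . "\n";
    }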

Don't delete the lockfile when you are done by the way. That can lead to subtle races.

Re^2: Easiest way to protect process from duplication.
by sundialsvc4 (Abbot) on Jan 24, 2012 at 14:58 UTC

    Oh? Interesting. Can you edify us as to what that subtle race condition is?

      Suppose the sequence in the program is:

          open
          lock
          unlink
          exit
      The unlink comes before any unlock or close; otherwise you get even more race scenarios (on Windows you must actually close the file before you can delete it). The open is an open with create (O_CREAT), because otherwise unlinking makes the next program invocation fail, but without exclusive (O_EXCL), because otherwise we are getting into a different locking scheme (with even more problems). This type of open is what you get if you do a plain open($fh, ">", $file) in perl.
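
      For concreteness, a sketch of that problematic pattern (lockfile name is hypothetical):

          use strict;
          use warnings;
          use Fcntl qw(:flock O_RDWR O_CREAT);

          my $lockfile = '/tmp/myapp.lock';   # hypothetical name

          # open with create (O_CREAT) but without exclusive (O_EXCL)
          sysopen my $fh, $lockfile, O_RDWR | O_CREAT
              or die "Can't open $lockfile: $!";
          flock $fh, LOCK_EX or die "Can't lock $lockfile: $!";

          # ... do the work ...

          unlink $lockfile;   # delete while still holding the lock
          exit;               # implicit close and unlock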

      Now, with three processes running that program, you can get this sequence:

      process A: open (and create)
      process A: lock
      process B: open (same file, so no create)
      process A: unlink
      process A: exit (implicit unlock)
      process B: lock (on the file A just deleted, since B still has an open handle on it)
      process C: open (and create a new file with the old path name)
      process C: lock (on the new file)
      Now process B and C are running simultaneously, each holding a lock on a different file, of which only one is visible in the filesystem.