http://qs1969.pair.com?node_id=590619

eyepopslikeamosquito has asked for the wisdom of the Perl Monks concerning the following question:

A workmate asked me the best way to ensure that only one copy of his Perl script is running at a time. A quick Super Search turned up a couple of earlier nodes that caught my eye.

I was intrigued by the adamk trick, so tested it with:

use strict;
use warnings;
use Fcntl qw(:flock);

print "start of program\n";
unless (flock(DATA, LOCK_EX|LOCK_NB)) {
    print "$0 is already running. Exiting.\n";
    exit(1);
}
print "sleeping 15...\n";
sleep(15);
print "end of program\n";

__DATA__
This exists so flock() code above works.
DO NOT REMOVE THIS DATA SECTION.
This works fine on Unix. On Windows, however, instead of exiting with a nice "already running" message, the perl script surprised me by printing nothing and exiting immediately with an exit code of zero (this is the same perplexing behaviour reported by BrowserUk in Re: Why no LockFile() in Win32API::File ?).

Stepping through the code in the debugger reveals the root cause of this odd behaviour. When the file is locked on Windows, you can open it ok but you get an error when you attempt to read from it. Now, the lower layers of the Perl IO system detect the read error all right (setting PerlIOBase(f)->flags |= PERLIO_F_ERROR), but the first PerlIO_getc() call in Perl_sv_gets() returns EOF ... so perl dutifully parses and runs an empty file, which explains why nothing happens and you get an exit code of zero.

So, being pressed for time, I suggested this variation on adamk's cute trick:

use strict;
use warnings;
use Fcntl qw(:flock);

my $lockfile = 'mylockfile';

sub BailOut {
    print "$0 is already running. Exiting.\n";
    print "(File '$lockfile' is locked).\n";
    exit(1);
}

print "start of program\n";
open(my $fhpid, '>', $lockfile) or die "error: open '$lockfile': $!";
flock($fhpid, LOCK_EX|LOCK_NB) or BailOut();
print "sleeping 15...\n";
sleep(15);
print "end of program\n";

# Note: lock should be automatically released at end of program,
# no matter how the process is terminated.
Though this simple solution seems sound to me, it is easy to overlook subtle problems and race conditions. So if anyone can see a flaw in this solution or has any other advice, please respond away.

Update: the above test program still seems to work the same way today (2020).


Re: Ensuring only one copy of a perl script is running at a time
by f00li5h (Chaplain) on Dec 19, 2006 at 10:53 UTC

    File::Pid may be helpful to you.
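
    A minimal sketch of File::Pid usage (the pidfile path is made up; note that the check-then-write sequence is not atomic, so a small race window remains):

    use strict;
    use warnings;
    use File::Pid;

    my $pidfile = File::Pid->new({ file => '/tmp/myapp.pid' });

    if (my $num = $pidfile->running) {
        die "$0 is already running (pid $num). Exiting.\n";
    }
    $pidfile->write;        # record our own pid

    # ... do the real work ...

    $pidfile->remove;       # clean up on normal exit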

    Also, merlyn wrote Highlander: one instance at a time (Oct 2000), although it does use flock.

    I'll skip the ever-so-tempting OS trolling ...

    @_=qw; ask f00li5h to appear and remain for a moment of pretend better than a lifetime;;s;;@_[map hex,split'',B204316D8C2A4516DE];;y/05/os/&print;
Re: Ensuring only one copy of a perl script is running at a time
by jonadab (Parson) on Dec 19, 2006 at 13:49 UTC
    When the file is locked on Windows, you can open it ok but you get an error when you attempt to read from it.

    I'm trying to think of a situation wherein that behaviour would be useful or helpful, and I'm coming up blank. Now, if you only got an error when you tried to write to the file, I can see how that could be useful, but only when you try to read? If you can't even read, what's the point in being able to open?

    But if that's the way it's always been, that's the way it'll stay. ISVs will have found thousands of ways to rely on the existing behavior, even if it makes no sense, and changing it would break them all.

    So you may as well join them and actually (try to) read from the file to see if it's locked or not.


    Sanity? Oh, yeah, I've got all kinds of sanity. In fact, I've developed whole new kinds of sanity. You can just call me "Mister Sanity". Why, I've got so much sanity it's driving me crazy.

      If you can't even read, what's the point in being able to open?
      You need the open to succeed because you need its returned file descriptor as the first argument to the system call to test if the file is locked.

      The main issue here seems to be mandatory versus advisory locking. Windows uses mandatory locking, while Unix uses advisory locking (by default). The Windows locking behaviour described above is actually similar to that of (SVR3/4-based) Unix systems that support mandatory locking (via the hack of turning on the file's set-group-ID bit and turning off its group-execute bit). On such Unix systems, if a file is mandatory write-locked, the open succeeds, while a non-blocking read fails with EAGAIN and a blocking read blocks until the write-lock is removed.
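
      For instance, here is a minimal sketch of probing for a mandatory write-lock with a non-blocking read. It assumes a Unix filesystem with mandatory locking enabled, a hypothetical file 'locked_file' with the set-group-ID bit on and group-execute off, and another process holding a write-lock on it:

      use strict;
      use warnings;
      use Fcntl;
      use Errno qw(EAGAIN);

      # The open itself succeeds even though the file is locked ...
      sysopen(my $fh, 'locked_file', O_RDONLY | O_NONBLOCK)
          or die "open failed: $!\n";

      # ... but a non-blocking read of a mandatory write-locked
      # file fails with EAGAIN instead of blocking.
      my $n = sysread($fh, my $buf, 1024);
      if (!defined($n) && $! == EAGAIN) {
          print "file is mandatory write-locked\n";
      }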

Re: Ensuring only one copy of a perl script is running at a time
by Skeeve (Parson) on Dec 19, 2006 at 16:28 UTC

    If I can't use flock or a module, I usually create a "semaphore" file: I define a filename and location and create a 0-byte file there. When the program starts, it tries to rename that file to some temporary name. If this succeeds, the program may run; if not, I retry or bail out.

    The file has to be renamed back when the program is done.

    This algorithm assumes that renaming a file is an atomic operation, meaning that it can't be interrupted by the system, and that a program which renamed the file really did do so.
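
    A minimal sketch of this rename-based approach (the semaphore file name 'mysemaphore' is made up, and the empty file must be created once ahead of time):

    use strict;
    use warnings;

    my $sem  = 'mysemaphore';
    my $held = "$sem.$$";        # temporary name unique to this process

    # rename() is atomic, so at most one process can win the race
    if (rename($sem, $held)) {
        # ... do the real work ...
        rename($held, $sem) or die "cannot restore semaphore: $!\n";
    }
    else {
        die "another instance appears to be running ('$sem' is gone)\n";
    }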


    s$$([},&%#}/&/]+}%&{})*;#$&&s&&$^X.($'^"%]=\&(|?*{%
    +.+=%;.#_}\&"^"-+%*).}%:##%}={~=~:.")&e&&s""`$''`"e
Re: Ensuring only one copy of a perl script is running at a time
by ferreira (Chaplain) on Dec 19, 2006 at 16:20 UTC
    You may want to try Sys::RunAlone by Elizabeth Mattijsen as well.
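
    A minimal sketch of its use, assuming I recall the module's behaviour correctly (it flocks the script's own __END__/__DATA__ section at startup and exits with a warning if the lock is already held):

    use strict;
    use warnings;
    use Sys::RunAlone;   # exits early if another instance holds the lock

    print "doing the real work...\n";
    sleep(15);

    __END__
    Sys::RunAlone requires an __END__ or __DATA__ section to flock.
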
Re: Ensuring only one copy of a perl script is running at a time
by benizi (Hermit) on Dec 20, 2006 at 00:21 UTC

    Not sure if it's relevant to your situation, but this won't work over (some?) network filesystems. In my case, my home directory is in AFS. Using either $0 or DATA flock'ing prevents two copies of the same script from running on the same machine, but a script run from another machine proceeds as if nothing's flock'ed. I could see this being an issue if, say, a pool of webservers were serving your files.

    A more robust solution might be to use a relational database system's locking mechanism. (This is often pretty convenient if you're already using a DB for other tasks.) My favorite method is to create a lock table as follows:

    the MySQL:

    create table foolock (
        /* since we're exclusive per-program, make program primary */
        program varchar(64) primary key,
        client  varchar(255) not null,
        time    timestamp
    );

    and the Perl:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Basename;
    use Sys::Hostname;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:test:host', 'foolocker', 'foopass',
                           { RaiseError => 1 });
    my $lock_interval = 10; # minimum wait in milliseconds between attempts
    my $max_attempts  = 5;  # maximum number of attempts

    sub prog_client { ((basename $0), hostname . ":$$"); }

    sub get_lock {
        sql_lock('insert into foolock (program, client) values (?,?)',
                 prog_client);
    }

    sub finish_up {
        sql_lock('delete from foolock where program = ? and client = ?',
                 prog_client);
    }

    sub sql_lock {
        my ($sql, @params) = @_;
        my $lock    = $dbh->prepare_cached($sql);
        my $backoff = $lock_interval / 1000;
        for (1 .. $max_attempts) {
            my $got = eval {
                $dbh->do('lock tables foolock write');
                $lock->execute(@params) >= 0;
            };
            $dbh->do('unlock tables');
            return 1 if $got;
            select undef, undef, undef, $backoff;  # sub-second sleep
            $backoff *= 2;
        }
        return 0;
    }

    if (get_lock) {
        # do your thing
        finish_up;
    }
    else {
        # take appropriate action
    }

    The get_lock and finish_up allow you to detect (via entries in the foolock table) when a script died without finishing. But, in the way it's laid out above, this prevents other processes from then acquiring locks. (Which for my task was desirable.)

Re: Ensuring only one copy of a perl script is running at a time
by sgt (Deacon) on Dec 19, 2006 at 21:53 UTC

    I have often used the algorithm described in LockFile::Simple by Raphael Manfredi, which uses atomic operations of the file system. Not perfect, but simple enough that it can also be implemented in pure shell (no external commands), which can be useful sometimes. It can even (mostly) work over NFS/NAS, assuming a not-too-dumb NFS implementation.
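
    A minimal sketch using the module's functional interface (the lock file path is made up; the module creates and removes a lock file atomically, see its docs for the -format and stale-lock options):

    use strict;
    use warnings;
    use LockFile::Simple qw(lock trylock unlock);

    # try once, without blocking or retrying forever
    trylock('/var/run/myapp')
        or die "another instance appears to be running\n";

    # ... do the real work ...

    unlock('/var/run/myapp');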

    The Windows behaviour is certainly confusing. What is the point of having a supposedly portable implementation of locking (perl's flock) if you cannot use it on some platform? I would rather have it die saying "not implemented", so that one knows a workaround is needed. I think it's a bug (at least a documentation bug).

    hth --stephan
Re: Ensuring only one copy of a perl script is running at a time
by Limbic~Region (Chancellor) on Dec 23, 2006 at 17:46 UTC
Re: Ensuring only one copy of a perl script is running at a time
by Moron (Curate) on Dec 19, 2006 at 16:54 UTC
    As I understand it, the first argument to flock must be a filehandle. So the fact that the tricky use of __DATA__ works on unix seems better characterised as an unclosed loophole and your own solution as the correct version that accords with the manual.

    -M

    Free your mind

      The problem is that Windows uses mandatory locking, while unix uses advisory locking. It has nothing to do with DATA. Perl doesn't check whether the source file is locked before reading it, so advisory locks are completely ignored. With mandatory locking, however, perl can't read the source file while it's locked.

      DATA is a filehandle to the file being executed. It's well known that one can seek to offset 0 of DATA to read the source code. Locking DATA is the same thing as locking the file whose name is in $0. It doesn't matter how you lock the script (using DATA or $0 (as shown below)), the problem still exists.

      use strict;
      use warnings;
      use Fcntl qw(:flock);

      print "start of program\n";
      open(my $script_fh, '<', $0)
          or die("Unable to open script source: $!\n");
      unless (flock($script_fh, LOCK_EX|LOCK_NB)) {
          print "$0 is already running. Exiting.\n";
          exit(1);
      }
      print "sleeping 15...\n";
      sleep(15);
      print "end of program\n";

      __DATA__
      This exists so flock() code above works.
      DO NOT REMOVE THIS DATA SECTION.
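
      As an aside, the claim above that DATA is just a handle into the script file itself is easy to verify with a tiny sketch:

      use strict;
      use warnings;

      # DATA starts positioned just past the __DATA__ token; rewinding
      # it to offset 0 exposes the entire source of the running script.
      seek(DATA, 0, 0) or die "seek: $!";
      print while <DATA>;   # prints this whole file

      __DATA__
      marker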

      Update: Updated the non-code portion for clarity.

        While executing the above script, the contents of the given file get deleted. Why?
        The fact that locking behaviour differs by platform is not relevant here because (see the manual link labelled 'it' in my earlier reply) flock deliberately uses its own advisory locking irrespective of platform, precisely for such reasons of portability.

        -M

        Free your mind

Re: Ensuring only one copy of a perl script is running at a time
by bayugyug (Beadle) on Dec 21, 2006 at 06:16 UTC
    Hi. To ensure only one instance of the application is running using perl, I have a suggestion:

    1. Open a specific port for locking. The port must be greater than 10000 and less than 60000 (ex: port 28004).
    2. As long as the port is open, it means a program is still running on that particular port.
    3. Upon program exit, close the port.

    I have example code below (it's a class); hope this helps. How to use:
    #-----------------------------------------------------

    use PortLocker;

    my $port   = 52480;
    my $locker = PortLocker->new($port);   # pass the port to lock

    if (!$locker->lock()) {
        # failed: port already locked
        die("Port already locked, please check if already running!\n");
    }

    # --- your program continues here ---

    # then, before exit, unlock the port
    $locker->unlock();

    #-----------------------------------------------------






    PortLocker.pm (code):

    #------------------------------------------------------------------
    # Filename    : PortLocker.pm
    # Description : package that locks a specified port number
    # Date        : 08-10-2001
    # Ver         : ver1.1
    # Author      : bayugyug
    #------------------------------------------------------------------
    package PortLocker;

    use strict;
    use warnings;
    use IO::Socket;

    # default port to lock if none is given
    use constant PORT_LOCK => 28788;

    my $portlock;             # the listening socket that holds the lock
    my $VERSION = "1.01";

    $| = 1;

    sub new
    {
        my ($type, $port) = @_;
        my $self = {};
        $self->{PORT} = $port;
        $self->{PORT} = PORT_LOCK
            if !defined($self->{PORT}) || int($self->{PORT}) <= 0;
        return bless($self, $type);
    }

    sub lock
    {
        my $self = shift;
        my $port = $self->{PORT};

        # Holding a listening socket on the port acts as the lock:
        # a second process's listen() on the same port will fail.
        $portlock = IO::Socket::INET->new(
            LocalHost => 'localhost',
            LocalPort => $port,
            Proto     => 'tcp',
            Listen    => 1,
        );

        return 0 unless $portlock;   # failed: port already locked
        $self->debug("lock() :: port ($port) ok");
        return 1;
    }

    sub unlock
    {
        my $self = shift;
        close($portlock) if defined($portlock);
        $portlock = undef;
        $self->debug("unlock() :: ok");
    }

    sub debug
    {
        my ($self, @parm) = @_;
        my $msg = join(' : ', @parm);
        print "Ver$VERSION - $msg\n";
    }

    1;


    hope this helps, tnx

    #!/bin/bayugyug

      How do you know port 52480 isn't already in use? This port lies within the Ephemeral Port Range of most modern Unix systems, and could at any time be used by an unrelated TCP session.
Re: Ensuring only one copy of a perl script is running at a time
by PockMonk (Beadle) on Dec 19, 2006 at 16:52 UTC
    "A workmate asked me the best way to ensure that only one copy of his Perl script is running at a time".

    I just developed a perl script that needed to ensure exactly that, since it is CGI and can be run by multiple users over the internet, and I need to ensure only one concurrent usage. I simply created a file called "status.txt" on the server, and had the script overwrite its status in the file contents at start and exit: 0 was disabled, 1 was enabled but not in use, 2 was enabled and in use. I'm sure it's not the quickest or best way, but as always with Perl, TMTOWTDI and all that, and it's simple and works just fine.

    It also has the benefit of letting you examine the file contents yourself and decide on action based on them. You could also use this to set a limit of, say, 2 concurrent accesses, or 3 or 4 or... you get the point, by simply incrementing on each script start and decrementing (is that a word?) on each script exit.

      It sounds like it is time for you to update your computer science knowledge by learning about race conditions.

      I need to ensure only one conucrrent usage [....] works just fine

      It works just fine as far as you have noticed so far. It certainly doesn't "ensure" only one concurrent use; it more like usually prevents more than one concurrent use. (:

      Your code must perform the following steps:

      1. Check current status
      2. if not 1 then exit
      3. Set current status to 2
      4. Do work
      5. Set current status back to 1

      And, in a modern computer system, CPU resources are shared, so each process that is serving a CGI request can be interrupted between any of those steps (or in the middle of steps) in order to let some other process do some work for a bit. Two CGI requests coming in at roughly the same time can thus perform those steps in the following order:

      One process                    Other process

      my $status= CheckStatus();
      exit if 1 != $status;
                                     my $status= CheckStatus();
                                     exit if 1 != $status;
      SetStatus( 2 );
                                     SetStatus( 2 );
      DoWork();
                                     DoWork();
      SetStatus( 1 );
                                     SetStatus( 1 );

      Note that they both see the status as "1" and both end up running concurrently. This is why operating systems provide locking mechanisms and why you often need to use such.

      - tye        

        You will be happy to learn that I do use flock on opening the file so there is no danger of what you describe happening.

        Rather than locking a file for the duration of the script, I simply lock it while changing its status etc. As posted, this gives me greater flexibility to record more info than just in-use/not-in-use, by writing status codes to the file content. Thanks anyway!

        Dan

      That's wrong. It suffers from a race condition.

          +===============================+===============================+
          |           Process 1           |           Process 2           |
          +===============================+===============================+
          | open the status file          |                               |
        T | read the status (status is 1) |                               |
        i +-------------------------------+-------------------------------+
        m |                               | open the status file          |
        e |                               | read the status (status is 1) |
        | |                               | write 2 to the status file    |
        | |                               | proceed                       |
        | +-------------------------------+-------------------------------+
        v | write 2 to the status file    |                               |
          | proceed                       |                               |
          +===============================+===============================+

      If you use flock, then it's just an extension of what the OP posted.

      ( Oops! Seems like tye posted something similar when I was writing this node. )

Re: Ensuring only one copy of a perl script is running at a time
by tokpela (Chaplain) on Mar 25, 2011 at 00:13 UTC

    On Windows, I used Win32::Mutex to ensure that a process was not already running.

    However, I seem to remember that this only worked on a per-user basis and did not prevent another user from running the same script. But since I was running it via a scheduled task, it worked well.
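
    Something like this, perhaps (a minimal sketch; the mutex name is made up, and wait(0) merely polls for ownership without blocking):

    use strict;
    use warnings;
    use Win32::Mutex;

    # A named mutex is visible to other processes that use the same
    # name (though possibly only within one login session, which may
    # explain the per-user behaviour mentioned above).
    my $mutex = Win32::Mutex->new(0, 'MyApp_SingleInstance')
        or die "cannot create mutex: $^E\n";

    if ($mutex->wait(0)) {          # try to grab it, don't block
        print "doing the real work...\n";
        sleep(15);
        $mutex->release;
    }
    else {
        print "$0 is already running. Exiting.\n";
        exit(1);
    }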

      What about this?

      my $cmd = 'ps -eaf | grep -c "\< '."$0".'\>"';
      chomp(my $instances = `$cmd`);
      exit if $instances > 1;
Re: Ensuring only one copy of a perl script is running at a time
by Anonymous Monk on Mar 23, 2011 at 06:27 UTC
    Whatever file we mention in the script, rather than the file being locked, its contents get deleted. Why?