SIGSEGV has asked for the wisdom of the Perl Monks concerning the following question:

Hello Perl IPC Wizards,

I have written a script to do some Unix/DBMS admin tasks while running as a daemon.
I used the POSIX::setsid() call and did the usual preparations as suggested in "perldoc perlipc" to auto-background.

Before that, however, I created a kind of semaphore file that holds the new PID of the daemonized child process, and acquired an exclusive lock on it.

Despite this, when I run the script anew while a daemon is already running, the flock call (which I or'ed with LOCK_NB so that I can send a warning mail or something similar before the die()) returns a true value, which screws up my intended logic (provided there is any) altogether.

First I thought the reason lay in my use of localized globs for the filehandles, which could go out of scope and thus close my semaphore file inadvertently.
But that can't be it, since I keep the refcount alive by returning the filehandle globs from the initialization sub; the caller assigns them to lexically scoped variables.
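
In outline, the relevant part looks something like this (simplified sketch; the real names and paths differ):

    use Fcntl qw(:flock);
    use POSIX qw(setsid);

    sub init_files {
        local (*LOCK, *LOG);

        # semaphore/pid file, locked exclusively before auto-backgrounding
        open LOCK, "> /var/run/mydaemon.pid" or die "pidfile: $!";
        flock(LOCK, LOCK_EX | LOCK_NB)
            or die "another daemon seems to be running";  # mail logic would go here

        open LOG, ">> /var/log/mydaemon.log" or die "logfile: $!";

        # returning the globs keeps the handles' refcounts alive
        return (*LOCK, *LOG);
    }

    my ($lock_fh, $log_fh) = init_files();

    # the usual perlipc-style auto-backgrounding
    defined(my $pid = fork) or die "fork: $!";
    exit if $pid;                  # parent exits
    setsid() or die "setsid: $!";

    # record the daemon child's new PID in the semaphore file
    seek($lock_fh, 0, 0);
    print $lock_fh "$$\n";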

I also keep logging to another file through one of these variables, so the references must still be valid.

Is the backgrounded child somehow losing the file lock by becoming a session leader?

I ask mostly out of curiosity, because I could easily insert some logic that sends a "kill 0 => $pid" to the PID read from the semaphore file to check whether another daemon is already running. But I would rather do it with file locks.
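
For reference, that fallback check would be roughly this (same made-up pidfile path as in the sketch above):

    open PID, "< /var/run/mydaemon.pid" or die "pidfile: $!";
    my $pid = <PID>;
    close PID;
    chomp $pid if defined $pid;
    die "daemon $pid seems to be running already\n"
        if $pid && kill 0 => $pid;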

Re: Keeping File Locks after Daemonization (fork)
by tye (Sage) on Jan 30, 2003 at 18:46 UTC

    "auto-backgrounding" uses fork (I assume). flock says:

    On systems that support a real flock(), locks are inherited across fork() calls, whereas those that must resort to the more capricious fcntl() function lose the locks, making it harder to write servers.
    So I'd be sure not to grab the lock until after forking.
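
    Roughly, that order of operations (untested sketch; the pidfile path is made up):

        use Fcntl qw(:flock);
        use POSIX qw(setsid);

        defined(my $pid = fork) or die "fork: $!";
        exit if $pid;                    # parent exits
        setsid() or die "setsid: $!";

        # only now, in the daemonized child, take the lock
        open my $lock, "> /var/run/mydaemon.pid" or die "pidfile: $!";
        flock($lock, LOCK_EX | LOCK_NB)
            or die "another instance already holds the lock\n";
        print $lock "$$\n";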

                    - tye
Re: Keeping File Locks after Daemonization
by cees (Curate) on Jan 30, 2003 at 19:02 UTC

    It might help to see some code, especially since you are passing file handles around...

    Also, if you have access to strace, you can use it to track down where the file is being closed and where you are losing your lock.

    strace -f perl file.pl

    The -f will tell it to keep tracing if the program forks. Look for any close() calls that seem out of place.

    I was actually working on a similar program yesterday, using the Proc::Daemon module, which does the backgrounding for you, and Proc::PID::File to generate pid files.

    use Proc::Daemon;
    use Proc::PID::File;

    Proc::Daemon::Init;

    my $pf = Proc::PID::File->new(dir => '/tmp');
    die "Already running!" if $pf->alive();

    # Child code goes here

    This will daemonize the process and create a pid file for you. Probably not exactly what you are looking for, but it might give you some hints.

      have you tested that well? i did the same just days ago and found...

      # Proc/PID/File.pm
      sub alive {
          my $self = shift;
          my $pid  = $self->read();
          print "> Proc::PID::File - pid: $pid" if $self->{debug};
          return $pid if $pid && $pid != $$ && kill(0, $pid);
          # $self->write();
          return 0;
      }
      # i've commented out the write() because i don't think that testing
      # for aliveness should write a pidfile...

      sub DESTROY {
          my $self = shift;
          $self->remove() if $self->{path} and $self->{written};
      }
      # i've added a written test so i don't remove pidfiles that aren't mine

      # add this to the end of write()
      $self->{written} = 1;

      # add this to file()
      $self->{written} = 0;

      because:

      die "Already running!" if Proc::PID::File->new()->alive();
      would delete the pidfile that it found...

        After looking at the code a little closer, it looks like there is a race condition in the 'alive' function in Proc::PID::File too, since it releases the lock after reading the PID and then re-acquires the lock later for writing. The lock should be held from the read right through to the write.
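
        Schematically I'd expect something more like this instead (untested sketch, not the module's actual code):

            use Fcntl qw(:DEFAULT :flock);

            my $pidfile = "/tmp/mydaemon.pid";  # made-up path

            sysopen(my $fh, $pidfile, O_RDWR | O_CREAT) or die "pidfile: $!";
            flock($fh, LOCK_EX) or die "flock: $!";  # held from read through write

            my $old = <$fh>;
            chomp $old if defined $old;
            die "already running as $old\n"
                if $old && $old != $$ && kill(0, $old);

            # still under the same lock: record ourselves
            seek($fh, 0, 0);
            truncate($fh, 0);
            print $fh "$$\n";
            # keep $fh open (and locked) for the daemon's lifetime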

        I ran into some other problems as well with perl 5.8.0 and Proc::Daemon. There appears to be a reference counting problem/bug in perl 5.8.0 which doesn't appear in 5.6.1. It has to do with the POSIX::close function. The standard perl close function takes a File Handle, but the POSIX::close function takes a File Descriptor. If you open a file normally and then close its File Descriptor with POSIX::close, perl will still think the file is open and linked to the File Descriptor that was just closed. For example:

        use POSIX ();

        open TEST, "+> /tmp/output";    # uses File Descriptor 3
        POSIX::close(3);                # closes the file, but TEST
                                        # is still linked to FD 3
        open TEST2, "+> /tmp/output2";  # uses FD 3 since it is
                                        # available again
        print TEST "Testing\n";         # will be written to output2
        close TEST2;                    # does not close the file,
                                        # because perl thinks TEST
                                        # still has it open
        close TEST;                     # now output2 is closed
                                        # and both file handles are
                                        # closed properly

        This looks like a contrived example, but POSIX::close is used by Proc::Daemon to close all open files right after the fork and setsid occur. So if you have any files open before the fork, they will be improperly closed. I had an __END__ block in my code, which causes the DATA filehandle to be opened for you (even though I didn't have a __DATA__ section, __END__ triggers it anyway).

        Since DATA was only half closed, it ended up causing a deadlock in the Proc::PID::File call to 'alive' (I won't go into details why, since this post is getting long enough :).
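
        As a workaround for the DATA problem specifically, explicitly closing that handle before daemonizing should keep POSIX::close from half-closing it behind perl's back (untested sketch):

            use Proc::Daemon;

            # __END__ opens DATA; close it with perl's own close
            # before Proc::Daemon::Init closes all descriptors
            close DATA if defined fileno DATA;

            Proc::Daemon::Init;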

        It looks like I might have to roll my own code for this after all...