in reply to Keeping File Locks after Daemonization

It might help to see some code, especially since you are passing file handles around...

Also, if you have access to strace you can use it to track down where the file is being closed and where you are losing your lock.

strace -f perl file.pl

The -f will tell it to keep tracing if the program forks. Look for any close() calls that look out of position.

I was actually working on a similar program yesterday, using the Proc::Daemon module which does the backgrounding for you, and the Proc::PID::File to generate pid files.

use Proc::Daemon;
use Proc::PID::File;

Proc::Daemon::Init;

my $pf = new Proc::PID::File(dir => '/tmp');
die "Already running!" if $pf->alive();

# Child code goes here

This will daemonize the process and create a pid file for you. Probably not exactly what you are looking for, but it might give you some hints.

Re: Re: Keeping File Locks after Daemonization
by zengargoyle (Deacon) on Jan 31, 2003 at 01:48 UTC

    have you tested that well? i just days ago did the same and found...

    # Proc/PID/File.pm
    sub alive {
        my $self = shift;
        my $pid  = $self->read();
        print "> Proc::PID::File - pid: $pid" if $self->{debug};
        return $pid if $pid && $pid != $$ && kill(0, $pid);
        # $self->write();
        return 0;
    }
    # i've commented out the write() because i don't think that testing
    # for aliveness should write a pidfile...

    sub DESTROY {
        my $self = shift;
        $self->remove() if $self->{path} and $self->{written};
    }
    # i've added a written test so i don't remove pidfiles that aren't mine

    # add this to the end of write()
    $self->{written} = 1;

    # add this to file()
    $self->{written} = 0;

    because:

    die "Already running!" if Proc::PID::File->new()->alive();
    would delete the pidfile that it found...

      After looking at the code a little closer, it looks like there is a race condition in the 'alive' function in Proc::PID::File too, since it releases the lock after reading the PID and then re-acquires the lock later for writing. The lock should be held from the read right through to the write.
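      To illustrate, here is a minimal sketch of holding one exclusive lock across both the read and the write, so no other process can slip in between the check and the update. The path, messages, and structure are my own invention, not Proc::PID::File's actual code:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Hypothetical pidfile path for the sketch.
my $pidfile = "/tmp/mydaemon.pid";

# '+>>' creates the file if needed and lets us both read and write.
open my $fh, '+>>', $pidfile or die "open $pidfile: $!";

# Take the exclusive lock BEFORE reading, and keep it until after writing.
flock $fh, LOCK_EX or die "flock $pidfile: $!";

seek $fh, 0, 0;
my $pid = <$fh>;
chomp $pid if defined $pid;

# Another instance is alive if the recorded PID exists, isn't us,
# and still responds to signal 0.
if (defined $pid && $pid =~ /^\d+$/ && $pid != $$ && kill 0, $pid) {
    die "Already running as PID $pid\n";
}

# Still holding the lock: record our own PID.
seek $fh, 0, 0;
truncate $fh, 0 or die "truncate: $!";
print {$fh} "$$\n";
close $fh;    # closing releases the lock
```

      Because the lock never drops between the read and the write, two instances started at the same moment cannot both decide the pidfile is stale.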

      I ran into some other problems as well with perl 5.8.0 and Proc::Daemon. There appears to be a reference counting problem/bug in perl 5.8.0, which doesn't appear in 5.6.1, involving the POSIX::close function. The standard perl close function takes a File Handle, but the POSIX::close function takes a File Descriptor. If you open a file normally and then close its File Descriptor with POSIX::close, perl will still think the file is open and linked to the File Descriptor that was just closed. For example:

      open TEST, "+> /tmp/output";   # uses File Descriptor 3
      POSIX::close(3);               # closes the file, but TEST
                                     # is still linked to FD 3
      open TEST2, "+> /tmp/output2"; # uses FD 3 since it is
                                     # available again
      print TEST "Testing\n";        # Will be written to output2
      close TEST2;                   # does not close the file,
                                     # because perl thinks TEST
                                     # still has it open.
      close TEST;                    # Now output2 is closed
                                     # and both file handles are
                                     # closed properly

      This looks like a contrived example, but POSIX::close is used by Proc::Daemon to close all open files right after the fork and setsid occur. So if you have any open files before the fork they will be improperly closed. I had an __END__ block in my code which triggers the DATA filehandle to be opened for you (even though I didn't have __DATA__ in my code, the __END__ triggers it anyway).

      Since DATA was only half closed, it ended up causing a deadlock in the Proc::PID::File call to 'alive' (I won't go into details why, since this post is getting long enough :).
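      One possible workaround is to close DATA at the perl level before daemonizing, so perl's handle bookkeeping stays in sync with the descriptors Proc::Daemon is about to POSIX::close. This is a sketch of my own, not code from either module; the helper name is made up:

```perl
use strict;
use warnings;

# Hypothetical workaround: a script with an __END__ (or __DATA__) section
# gets a DATA filehandle opened by perl. Closing it properly before
# Proc::Daemon::Init runs means POSIX::close won't half-close it later.
sub close_data_handle {
    # fileno returns undef if DATA was never opened, so this is safe
    # even in scripts without an __END__ section.
    if (defined fileno(*main::DATA)) {
        close(*main::DATA) or warn "close DATA: $!";
    }
}

close_data_handle();
# Proc::Daemon::Init would be called here, after DATA is safely closed.
```

      After this runs, the fork-and-close sweep in Proc::Daemon no longer finds a live descriptor that perl still believes belongs to DATA.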

      It looks like I might have to roll my own code for this after all...