
flock() broken under FreeBSD?

by tye (Sage)
on Jul 26, 2002 at 23:47 UTC ( [id://185697] : perlmeditation )

A while ago I had a problem where some locking code (in Perl) wasn't working like I expected. I pulled the locking code out and tested it and eventually began to suspect that flock() was just broken under FreeBSD (this was not a suspicion I came to easily; it was a bit shocking to think that what I consider a high quality operating system might have gotten such an important basic item wrong).

So I implemented a flock() work-alike that used fcntl() locks under the covers. Using fcntl() locks made everything work the way I expected.

Anyway, a fellow monk recently reminded me of this, and this is actually code used on the PerlMonks back end. So I'm publishing it here in case someone can show me my mistake or verify my suspicion. I'll forward it to FreeBSD if it looks like a real bug.

Some details. According to Config, Perl was built with a native flock(), not flock() emulation:

$ perl -MConfig -le 'print $Config{d_flock}'
define
A sample test run that shows the locks working via flock() (the bug is intermittent):
$ ./locktest & sleep 4; ./locktest
[1] 64228
Using flock()...
64228 shares.
64228 owns
64228 shares
Using flock()...
64233 shares.
64233 waiting for previous instance(s) to exit...
64228 leaving to allow new instance to run.
64233 owns
Running...
64233 owns
64233 shares
^C
And a sample run that shows how things break (more often than they work):
$ ./locktest & sleep 4; ./locktest
[1] 64308
Using flock()...
64308 shares.
64308 owns
64308 shares
Using flock()...
64310 shares.
64310 waiting for previous instance(s) to exit...
64308 owns
64308 shares
64308 owns
64308 shares
^C
$ kill %1
The important part is:
64310 shares.
64310 waiting for previous instance(s) to exit...
64308 owns
which shows process 64310 getting a shared lock (and holding it) and then the other process (64308) successfully getting an exclusive lock. You should not be able to get an exclusive lock if anyone else has a shared lock.
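To make the complaint concrete, the invariant at stake can be checked directly: while one process holds a shared lock, a non-blocking exclusive request from another process must fail. A minimal sketch (the scratch-file name and sub name are mine, not from the test script below):

```perl
#!/usr/bin/perl -w
use strict;
use Fcntl qw( LOCK_SH LOCK_EX LOCK_NB );

# Returns true if a non-blocking LOCK_EX from a child process is
# refused while the parent holds LOCK_SH -- the behavior flock()
# is supposed to guarantee.
sub shared_blocks_exclusive {
    my $file= 'lock.demo';      # arbitrary scratch file
    open my $fh, '+>', $file  or  die "open: $!";
    flock( $fh, LOCK_SH )  or  die "LOCK_SH: $!";

    my $pid= fork();
    die "fork: $!"  if  ! defined $pid;
    if( 0 == $pid ) {
        # A fresh handle, so the child isn't sharing the parent's
        # lock via an inherited file descriptor.
        open my $cfh, '+<', $file  or  die "child open: $!";
        exit( flock( $cfh, LOCK_EX|LOCK_NB ) ? 1 : 0 );
    }
    waitpid( $pid, 0 );
    my $child_got_lock= $? >> 8;
    unlink $file;
    return ! $child_got_lock;
}

print shared_blocks_exclusive()
    ? "OK: exclusive lock refused while shared lock held\n"
    : "BUG: exclusive lock granted despite shared lock\n";
```

On a correct flock() this prints the "OK" line. (Note that this simple check does not involve lock upgrades, so it can pass even on a system with the intermittent failure shown above.)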

And here's how to run it with fcntl() locks instead (which, if it ever fails, does so only extremely rarely):

$ ./locktest 1 & sleep 4; ./locktest 1
[1] 64442
Using fcntl() locks...
64442 shares.
64442 owns
64442 shares
Using fcntl() locks...
64446 shares.
64446 waiting for previous instance(s) to exit...
64442 leaving to allow new instance to run.
64446 owns
Running...
64446 owns
64446 shares
^C
And (just for completeness), here is a run showing what happens if you start the second instance at just the wrong time. This is a case I didn't handle in this sample code, since doing so would complicate the code and it has nothing to do with what I'm reporting; I mention it only so that no one gets distracted if they happen to run into it:
$ ( sleep 1; ./locktest 1 ) & sleep 8; ./locktest 1
[1] 64887
Using fcntl() locks...
64890 shares.
64890 owns
64890 shares
64890 owns
Using fcntl() locks...
64891 can't lock self: Resource temporarily unavailable
64890 shares
$ kill %1
and then the source code:
#!/usr/bin/perl -w
# use strict;
use Fcntl qw( LOCK_SH LOCK_EX LOCK_UN LOCK_NB );

# "./locktest" uses flock(), "./locktest 1" uses fcntl() locks.
use constant FCNTL => 0 < @ARGV;

BEGIN {
    if( ! FCNTL ) {
        warn "Using flock()...\n";
    } else {
        warn "Using fcntl() locks...\n";
        require Fcntl;
        Fcntl->import( qw( F_GETLK F_SETLK F_SETLKW F_RDLCK F_UNLCK F_WRLCK ) );
        eval 'use subs "flock"';
        { my $f= *flock }   # Don't warn about 'flock' only used once.
        *flock= sub {
            my( $fh, $mode )= @_;
            if( ! ref($fh)  &&  $fh !~ /'|::/ ) {
                $fh= caller() . "::" . $fh;
            }
            my $nb= $mode & LOCK_NB();
            my $lock;
            my $count= 0;
            $count++, $lock= F_RDLCK()  if  $mode & LOCK_SH();
            $count++, $lock= F_WRLCK()  if  $mode & LOCK_EX();
            $count++, $lock= F_UNLCK()  if  $mode & LOCK_UN();
            if( 1 != $count ) {
                require Carp;
                Carp::croak( "$count of LOCK_SH, LOCK_EX, LOCK_UN set, not 1" );
            }
            # start, len, PID, type, whence:
            my $struct= pack( "LL LL I S S", 0,0, 0,0, 0, $lock, 0 );
            my $op= $nb ? F_SETLK() : F_SETLKW();
            return fcntl( $fh, $op, $struct );
        };
    }
}

open DATA, "+>lock"  or  warn "Can't open lock file: $!\n";
my %config= ( delay => 5 );
$|++;

flock( \*DATA, LOCK_SH|LOCK_NB )
    or  die "$$ can't lock self: $!\n";
warn "$$ shares.\n";

if( ! flock( \*DATA, LOCK_EX|LOCK_NB ) ) {
    warn "$$ waiting for previous instance(s) to exit...\n";
    select( undef, undef, undef, rand($config{delay}) );
    my $start= time();
    my $end;
    alarm( 5*$config{delay} );
    my $oldSig= $SIG{ALRM};
    $SIG{ALRM}= sub {
        warn "$$ previous instance(s) still running!\n";
        warn "$$ tho, lock obtained ".localtime($end),$/
            if  $end;
        die "$$ ", localtime($start)." .. ".localtime(), $/;
    };
    flock( \*DATA, LOCK_EX );
    warn "$$ owns\n";
    $end= time();
    alarm( 0 );
    $SIG{ALRM}= defined($oldSig) ? $oldSig : 'DEFAULT';
    warn "Running...\n";
}

# Will revert lock to shared below
while (1) {
    if( ! flock( \*DATA, LOCK_EX|LOCK_NB ) ) {
        warn "$$ leaving to allow new instance to run.\n";
        exit( 0 );
    }
    warn "$$ owns\n";
    sleep( 1 );
    flock( \*DATA, LOCK_SH|LOCK_NB )
        or  die "$$ can't revert self lock to shared: $!\n";
    warn "$$ shares\n";
    sleep $config{delay};
}
__END__

        - tye (but my friends call me "Tye")

Replies are listed 'Best First'.
Re: flock() broken under FreeBSD?
by dws (Chancellor) on Jul 27, 2002 at 00:48 UTC
    The important part is:
    64310 shares.
    64310 waiting for previous instance(s) to exit...
    64308 owns
    which shows process 64310 getting a shared lock (and holding it) and then the other process (64308) successfully getting an exclusive lock.
    Are you assuming that you can hold the lock while up- or down-grading it? I did, but the FreeBSD flock(2) man page provides this enlightenment:
    A shared lock may be upgraded to an exclusive lock, and vice versa, simply by specifying the appropriate lock type; this results in the previous lock being released and the new lock applied (possibly after other processes have gained and released the lock).
    I'm guessing that when 64310 hits the blocking flock( \*DATA, LOCK_EX ), the shared lock that 64310 previously held is released, and 64308 sneaks in to own the lock, leaving 64310 blocked. Then 64310 seems to go away, though I can't tell whether that's permanent or a side effect of your killing off the scripts.
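    If that reading is right, the release should be observable from a single process: flock() locks belong to the open file description, so two handles on the same file conflict with each other. A sketch (the file and handle names are mine; this assumes the release-then-reacquire conversion semantics the man page describes):

```perl
#!/usr/bin/perl -w
use strict;
use Fcntl qw( LOCK_SH LOCK_EX LOCK_NB );

# Two handles on the same file act as two independent lockers,
# even within one process, so we can watch a conversion happen.
my $file= 'convert.demo';   # arbitrary scratch file
open my $first,  '+>', $file  or  die "open: $!";
open my $second, '+<', $file  or  die "open: $!";

flock( $first,  LOCK_SH )  or  die "first LOCK_SH: $!";
flock( $second, LOCK_SH )  or  die "second LOCK_SH: $!";

# Upgrading $first must fail: $second still holds a shared lock.
my $upgraded= flock( $first, LOCK_EX|LOCK_NB );
print $upgraded ? "unexpected: upgrade succeeded\n"
                : "upgrade refused, as expected\n";

# Per the man page, the failed conversion already *released* the
# first handle's shared lock; if so, $second can now go exclusive.
my $stolen= flock( $second, LOCK_EX|LOCK_NB );
print $stolen ? "first handle's shared lock was silently dropped\n"
              : "first handle's shared lock survived the failed upgrade\n";

unlink $file;
```

    With these conversion semantics the second request succeeds, showing that the first handle's shared lock did not survive its own failed upgrade.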

      Following the debugging technique of "imagine how something might happen, then go confirm it," here's a story for how the behavior tye observes might happen. It involves an imagined implementation of FreeBSD's flock(2) and might provide some guidance for someone who cares to dig into the FreeBSD source.

      Assume an OS implementation of flock() that, either intentionally or inadvertently, gives priority to non-blocking requests. That is, a non-blocking flock() request will be satisfied without unblocking other processes that are waiting to acquire a lock, even though the non-blocking request releases its prior lock first. (Ignore whether this is sensible, and just assume that it's coded that way.)

      Now consider this scenario: Process A holds a shared lock on F. Process B blocks on a blocking request to acquire an exclusive lock. Process A makes a non-blocking request to "upgrade" its lock to exclusive. According to the flock(2) man page, this means releasing the shared lock first. But since the request is non-blocking, and since flock() is coded to give priority to non-blocking requests, process A acquires an exclusive lock even though B was waiting first. B is still blocked. By the same logic, A can then repeatedly "downgrade" the lock to shared and upgrade it to exclusive, all without unblocking B. B is starved until either A makes a blocking flock() request, or A releases the lock by an explicit close or by process termination.

      This is how it might happen, given the code tye provides. Can someone with access to FreeBSD sources (and the will to use them) confirm whether this is what's going on?

        Well I tried my code on Linux and the failure case looks a little different:

        $ ./locktest & sleep 4; ./locktest
        [1] 1553
        Using flock()...
        1553 shares.
        1553 owns
        1553 shares
        Using flock()...
        1557 shares.
        1557 waiting for previous instance(s) to exit...
        1553 owns
        1557 owns
        Running...
        1557 owns
        1553 can't revert self lock to shared: Resource temporarily unavailable
        1557 shares
        ^C
        [1]+  Exit 11                ./locktest
        Which demonstrates that Linux doesn't share the strong preference for non-blocking requests that FreeBSD appears to have.

        Having lock up-/down-grading introduce a race window in which the lock is released first strikes me as such a horrid design choice that I didn't even consider the possibility when reading "man flock". (It isn't mentioned in Linux's extremely short version of "man flock" either, though my test cases show that Linux behaves the same way.)

        Thanks for the enlightenment. Now I have one more reason to hate flock. I should find a module that provides a convenient wrapper for fcntl locks... (:
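        For what it's worth, the core of such a wrapper could look something like this sketch (the sub names are mine, and the pack() layout is the FreeBSD-specific one from the code above, so it would need adjusting per platform):

```perl
#!/usr/bin/perl -w
use strict;
use Fcntl qw( LOCK_SH LOCK_EX LOCK_UN LOCK_NB
              F_SETLK F_SETLKW F_RDLCK F_WRLCK F_UNLCK );

# Translate a flock()-style mode into an fcntl() lock type and
# operation.  Dies unless exactly one of the LOCK_* modes is given.
sub mode_to_fcntl {
    my( $mode )= @_;
    my @types= grep { $mode & $_->[0] }
        [ LOCK_SH, F_RDLCK ],
        [ LOCK_EX, F_WRLCK ],
        [ LOCK_UN, F_UNLCK ];
    die "exactly one of LOCK_SH, LOCK_EX, LOCK_UN required\n"
        if  1 != @types;
    my $op= ( $mode & LOCK_NB ) ? F_SETLK : F_SETLKW;
    return ( $types[0][1], $op );
}

# The struct flock layout below is the FreeBSD one used in the
# post's test script; other platforms lay the struct out differently.
sub fcntl_flock {
    my( $fh, $mode )= @_;
    my( $type, $op )= mode_to_fcntl( $mode );
    # start, len, PID, type, whence -- lock the whole file:
    my $struct= pack( "LL LL I S S", 0,0, 0,0, 0, $type, 0 );
    return fcntl( $fh, $op, $struct );
}
```

        fcntl() locks also sidestep the conversion race: per POSIX, an F_SETLK request that cannot be granted fails without disturbing the lock the caller already holds.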

                - tye (but my friends call me "Tye")
Re: flock() broken under FreeBSD?
by Anonymous Monk on Jul 29, 2002 at 13:08 UTC
    As Andrew Hunt and David Thomas wrote in The Pragmatic Programmer:
    Tip 26: "select" Isn't Broken.

    We worked on a project where a senior engineer was convinced that the select system call was broken on Solaris. No amount of persuasion or logic could change his mind (the fact that every other networking application on the box worked fine was irrelevant). He spent weeks writing work-arounds, which, for some odd reason, didn't seem to fix the problem. When finally forced to sit down and read the documentation on select, he discovered the problem and corrected it in a matter of minutes. We now use the phrase "select is broken" as a gentle reminder whenever one of us starts blaming the system for a fault that is likely to be our own.