So in this case I need several different scripts to all adhere to the same locks.
The behavior you complained about was that when the same process read-locked a file multiple times and then unlocked it, the file was unlocked entirely instead of a lock counter being decremented. Solving that problem doesn't require that the different scripts use the same lock discipline.
So does the whole file locking over NFS problem boil down to NFS being poorly designed (or at least being poorly designed for use by more than one client at a time) then?
I don't think it's fair to say it's a problem; you just have to use fcntl-locking, which is a POSIX standard. AFAIK, that's always been the case for portable programs, although flock over NFS may have worked on previous versions of RedHat. It's straightforward to implement your own lock-counting code if you want that behavior, and if you think it's useful, clean it up and put it on CPAN.
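If you want a sketch of what that lock counting might look like, here's one possibility. The package name LockCount is made up for illustration, and plain flock() stands in for the underlying primitive; over NFS you'd substitute fcntl-style locking (e.g. via the File::FcntlLock module from CPAN). The idea is simply to hold one real lock per file and count references to it:

```perl
package LockCount;
use strict;
use warnings;
use Fcntl qw( :flock );

my %locks;    # filename => { fh => filehandle, count => N }

# Take (or re-take) a shared lock; only the first request per file
# actually locks, later ones just bump the counter.
sub lock_shared {
    my ($file) = @_;
    if ( $locks{$file} ) {
        $locks{$file}{count}++;
        return 1;
    }
    open my $fh, '<', $file or return 0;
    flock( $fh, LOCK_SH )   or return 0;
    $locks{$file} = { fh => $fh, count => 1 };
    return 1;
}

# Release one reference; the real lock goes away only when the
# counter drops to zero.
sub unlock {
    my ($file) = @_;
    return 0 unless $locks{$file};
    if ( --$locks{$file}{count} == 0 ) {
        my $entry = delete $locks{$file};
        flock( $entry->{fh}, LOCK_UN );
        close $entry->{fh};
    }
    return 1;
}

1;
```

Since there is only ever one handle per file, nothing can close a duplicate handle and drop the lock out from under the counter.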
I don't see any documentation of the lock-counting behavior you describe in the flock(2) manpage. Are you sure you weren't relying on undefined behavior all along?
I see two scenarios here. One is the one that Anonymous mentioned in his original post; the second is the typical problem of multiple processes. Both are quite valid concerns, but I believe they require two separate solutions. If closing any file handle to a file clears all locks on it, then there's not much point in using shared locks within the same process. But using shared locks per fcntl can prevent two different processes from stepping on each other's toes. So go ahead and place the locks for the sake of preventing your other scripts from stepping on your toes, but also keep some kind of internal locking mechanism--it has been suggested that a simple hash of file->lock status pairs would do for this.

But consider what happens if you have two handles/locks in one process and another process is waiting for an exclusive lock. When one of process A's handles is closed, it clears all of A's locks on the file and B starts working on it, even though A's second handle is still open. To prevent this, the only solution I see is to be very careful about how you design your code within A. Perhaps you need to keep an internal structure of filename->glob pairs so you always know when a handle is open and can pass the handle from sub to sub rather than opening a new handle; a sketch of that idea follows.
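A possible shape for that filename->glob bookkeeping; the names %open_handles, get_handle() and release_handle() are made up for illustration:

```perl
use strict;
use warnings;

my %open_handles;    # filename => already-open filehandle

# Hand out the one existing handle for a file, opening it only
# if this process doesn't have one open yet.
sub get_handle {
    my ($file) = @_;
    unless ( exists $open_handles{$file} ) {
        open my $fh, '+<', $file or die "Can't open $file: $!";
        $open_handles{$file} = $fh;
    }
    return $open_handles{$file};
}

# Close the handle only here, so no stray close() elsewhere can
# silently drop the process's locks on the file.
sub release_handle {
    my ($file) = @_;
    my $fh = delete $open_handles{$file} or return;
    close $fh;
}
```

Subs that need the file call get_handle() instead of open(), so the process never holds two handles on the same file without knowing it.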
But using shared locks per fcntl can prevent two different processes from stepping on each other's toes.
I think you've misunderstood the scope of the problem. fcntl locks are owned per-process, so they do keep separate processes from stepping on each other: if process A and process B hold shared locks on a file and A closes his handle, only A's lock is released, and B's lock survives. The trap is within a single process. If process C has two handles and two locks on a file, then both of C's locks are cleared as soon as either handle is closed, because closing any descriptor on a file releases all of that process's fcntl locks on it, even locks taken through a different descriptor.
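For the curious, here's a minimal sketch of that single-process trap, using the File::FcntlLock module from CPAN to issue POSIX fcntl locks. The file name /tmp/locktest and the surrounding scaffolding are just for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw( :DEFAULT :seek );
use File::FcntlLock;

my $file = '/tmp/locktest';    # any existing file will do

# Open the same file twice within this one process.
open my $fh1, '<', $file or die "Can't open $file: $!";
open my $fh2, '<', $file or die "Can't open $file: $!";

my $fl = File::FcntlLock->new(
    l_type   => F_RDLCK,    # shared (read) lock
    l_whence => SEEK_SET,
    l_start  => 0,
    l_len    => 0,          # 0 = lock the whole file
);

$fl->lock( $fh1, F_SETLKW ) or die "Lock via fh1 failed: " . $fl->error;
$fl->lock( $fh2, F_SETLKW ) or die "Lock via fh2 failed: " . $fl->error;

# The trap: closing EITHER descriptor releases ALL of this process's
# fcntl locks on the file, including the one taken via $fh1.
close $fh2;

# $fh1 is still open, but this process no longer holds any lock,
# so a waiting process can now grab an exclusive lock.
```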