I see two scenarios here: the one Anonymous mentioned in his original post, and the typical problem of multiple processes. Both are valid concerns, but I believe they require two separate solutions. If closing any file handle to a file clears all locks on it, then there's not much point in using shared locks within the same process. But shared locks via fcntl can still prevent two different processes from stepping on each other's toes. So go ahead and place the locks to keep your other scripts from interfering, but also keep some kind of internal locking mechanism; it has been suggested that a simple hash of filename => lock-status pairs would do for this.

Now consider what happens if process A holds two handles (and locks) on a file while process B is waiting for an exclusive lock. When one of A's handles is closed, it clears all of A's locks on the file, and B starts working on it even though A's second handle is still open. The only solution I see is to be very careful about how you design your code within A. Perhaps keep an internal structure of filename => glob pairs so you always know when a handle is already open, and pass that handle from sub to sub rather than opening a new one.
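Here's a minimal sketch of that bookkeeping idea. The sub names (get_locked_handle, release_handle) and the %open_files hash are my own invention, and I've used flock() from Fcntl for brevity rather than raw fcntl record locks; substitute your fcntl-based locking if you need POSIX semantics.

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # One entry per file: filename => { fh => $handle, locked => 0|1 }
    my %open_files;

    # Hypothetical helper: open (or reuse) a handle and take a lock.
    sub get_locked_handle {
        my ($file, $exclusive) = @_;

        # Reuse an existing handle so this process never opens the same
        # file twice -- closing a second handle would drop the process's
        # locks on the file out from under the first one.
        if (my $entry = $open_files{$file}) {
            return $entry->{fh};
        }

        open my $fh, '+<', $file or die "Can't open $file: $!";
        flock($fh, $exclusive ? LOCK_EX : LOCK_SH)
            or die "Can't lock $file: $!";

        $open_files{$file} = { fh => $fh, locked => 1 };
        return $fh;
    }

    # Release the lock and close the handle exactly once.
    sub release_handle {
        my ($file) = @_;
        my $entry = delete $open_files{$file} or return;
        flock($entry->{fh}, LOCK_UN);
        close $entry->{fh};
    }

Any sub that needs the file asks get_locked_handle() for it, so the handle is shared rather than reopened, and release_handle() is the single place where the lock is dropped.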