To avoid tromping on a file that is open for output elsewhere, and to keep someone else from tromping on yours, you can use flock. And to avoid race conditions, the sequence is: open, flock, work, unlock, close. In other words, it's not safe to open, test the lock, close, and then later assume that you can re-open and work on the file; by then another process could have claimed it.
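For instance, here's a minimal sketch of that sequence for an appending writer (the filename and log message are just placeholders):

    use strict;
    use warnings;
    use Fcntl qw(:flock);    # LOCK_EX, LOCK_UN

    my $logfile = 'app.log'; # hypothetical filename for illustration

    # open, flock, work, unlock, close -- in that order
    open my $fh, '>>', $logfile or die "Can't open $logfile: $!";
    flock $fh, LOCK_EX          or die "Can't lock $logfile: $!";

    print {$fh} "one log entry\n";   # work while holding the lock

    flock $fh, LOCK_UN          or die "Can't unlock $logfile: $!";
    close $fh                   or die "Can't close $logfile: $!";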
One issue that is not an issue is unlocking before closing. You might think that since close flushes the filehandle one final time, buffered output could get written after the unlock. Perl deals with this for you by flushing the filehandle before unlocking it. Be sure to read the docs on flock; locking is tricky, and has to be done right.
Of course, this is assuming that other processes are actually locking their logfiles, as they ought to: flock provides advisory locking, so it only protects you against processes that also use it. Well-behaved programs will, but there's no guarantee that everything you're looking at is well behaved.
If you delete via unlink (as opposed to truncation, for example) and roll via rename (not by copying), you may not have to worry. If you unlink a file that another process is reading, the file will continue to exist until the other process closes it. This is the basis for an old trick of opening, then unlinking, a file, so it will "disappear" even if the machine crashes. The file can be written and read through the open filehandle, even though it has no visible presence in the file system. Similarly, a file that is being appended to by another process can safely be renamed within the file system. The open file is "known" by its inode number, not by the name by which it was opened, and the inode number is unaltered by a rename within the same file system.
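Here's a sketch of that open-then-unlink trick (the temp path is made up for the example):

    use strict;
    use warnings;

    my $tmp = "/tmp/scratch.$$";   # hypothetical temp path for illustration

    open my $fh, '+>', $tmp or die "Can't create $tmp: $!";
    unlink $tmp             or die "Can't unlink $tmp: $!";

    # The name is gone from the directory, but the inode survives until
    # the last open handle is closed -- so the data cleans itself up
    # even if this process dies or the machine crashes.
    print {$fh} "scratch data\n";
    seek $fh, 0, 0 or die "Can't seek: $!";
    print "read back: ", scalar <$fh>;

    close $fh;   # now the inode is actually released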
Don't rely on atime. Most Linux systems now mount their file systems with relatime or noatime, so atime is updated rarely or not at all.
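If you need a "was this touched recently?" signal, mtime from stat is the safer bet. A sketch, with the filename assumed:

    use strict;
    use warnings;

    my $file = 'app.log';   # hypothetical filename for illustration

    my @st = stat $file or die "Can't stat $file: $!";
    my ($atime, $mtime) = @st[8, 9];   # slots 8 and 9 are atime and mtime

    # prefer mtime: under relatime/noatime mounts, atime may be stale
    printf "last modified: %s\n", scalar localtime $mtime;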