jabowery has asked for the wisdom of the Perl Monks concerning the following question:

I've got two processes that both write to the same file using the same sequence:
    use Fcntl qw(:flock :seek);
    open(A, ">>alldata.log") or die("Cannot open file: $!");
    flock(A, LOCK_EX);
    seek(A, 0, SEEK_END) or die("Cannot seek: $!");
    print A $logstring;
    close A;

Occasionally, I find that a long $logstring from one process is interrupted mid-line by a $logstring from the other process; after the interrupting $logstring's newline, the long $logstring resumes.

I thought that flushing was handled properly automatically.

PS: This happens with or without the seek.

Replies are listed 'Best First'.
Re: flock seek flush
by Loops (Curate) on Jul 18, 2013 at 01:18 UTC

    Flushing is handled automatically for you when locking or unlocking. It would be clearer if you explicitly issued the LOCK_UN above instead of relying on the proper ordering of that step inside close(). It would also be prudent to test and die() on the locking and unlocking steps.
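
    To make that concrete, here is a minimal sketch of the pattern described above, using the same filename as the original post. It is an illustration, not the poster's actual code: it adds a lexical filehandle, three-arg open, an explicit LOCK_UN, and error checks on every step.

        use strict;
        use warnings;
        use Fcntl qw(:flock :seek);

        # Append one record under an exclusive lock, checking every step.
        sub append_log {
            my ($file, $logstring) = @_;
            open(my $fh, '>>', $file) or die "Cannot open $file: $!";
            flock($fh, LOCK_EX)       or die "Cannot lock $file: $!";
            # Re-seek to EOF in case another process appended while we waited.
            seek($fh, 0, SEEK_END)    or die "Cannot seek: $!";
            print {$fh} $logstring    or die "Cannot write: $!";
            # flock flushes the handle before releasing the lock.
            flock($fh, LOCK_UN)       or die "Cannot unlock $file: $!";
            close($fh)                or die "Cannot close $file: $!";
        }

        append_log('alldata.log', "one complete record\n");

    Because each record is built into a single scalar and written with one print under the lock, two processes calling append_log() concurrently cannot interleave partial lines on a local filesystem.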

    Having said all that, the code you posted should really work as-is as long as you're not writing to a network filesystem. Are you 100% sure that the processes in question are writing the $logstring atomically in one operation? Is it possible that they're reading from a network feed themselves and writing incomplete data to your log?

      The source of the interrupted $logstring is LWP::Simple::get. It's hard to imagine how that could produce the phenomenon you're describing. Moreover, the $logstring has a DateTime->now prepended, and that DateTime->now appears only once in the longer $logstring -- where it is supposed to appear.