in reply to Re: (tye)Re: Those fork()ing flock()ers...
in thread Those fork()ing flock()ers...

Yes, appending is atomic. From "man 2 write":

    If the O_APPEND flag of the file status flags is set, the file offset will be set to the end of the file prior to each write and no intervening file modification operation will occur between changing the file offset and the write operation.

A partial syswrite is possible, but for "regular" files it means that writing the rest of the data is going to fail anyway (unless whatever resource exhaustion caused the initial partial write is resolved in the interim).
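
A minimal sketch of what that looks like in Perl (the filename here is invented):

    #!/usr/bin/perl
    # Sketch: append with O_APPEND and check syswrite for a short count.
    use strict;
    use warnings;
    use Fcntl qw(O_WRONLY O_APPEND O_CREAT);

    sysopen(my $fh, 'results.log', O_WRONLY | O_APPEND | O_CREAT)
        or die "open: $!";

    my $line    = "child $$ finished\n";
    my $written = syswrite($fh, $line);

    if (!defined $written) {
        die "write failed: $!";
    }
    elsif ($written != length $line) {
        # A short count on a regular file usually means the disk filled
        # up, so a retry is unlikely to succeed.
        die "partial write: $written of ", length($line), " bytes";
    }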

I was about to update my node with the following alternative when I noticed your reply. Simply reopen the file once in each child and use flock as usual. The reason flock doesn't work across the fork is that the file descriptors are all duplicates of each other, so they share a single lock. The documentation I was able to find on flock really sucked at explaining it (as far as I'm concerned, the Linux version was simply incorrect). But if reopening didn't work, flock would be useless. (:

(Updated to add "once" above.)
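
A minimal sketch of that approach (the filename and the number of children are invented for illustration):

    #!/usr/bin/perl
    # Sketch: each child reopens the file itself after the fork, so its
    # descriptor is not a dup of the parent's and flock locks it
    # independently.
    use strict;
    use warnings;
    use Fcntl qw(:flock);

    my $kids = 4;    # hypothetical child count

    for (1 .. $kids) {
        defined(my $pid = fork) or die "fork: $!";
        next if $pid;    # parent keeps forking

        # Child: open *after* the fork, not in the parent.
        open(my $fh, '>>', 'results.log') or die "open: $!";
        flock($fh, LOCK_EX) or die "flock: $!";
        print $fh "child $$ was here\n";
        close $fh;    # flushes the buffer and releases the lock
        exit 0;
    }
    wait for 1 .. $kids;    # reap the children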

        - tye (but my friends call me "Tye")

Re: (tye)Re2: Those fork()ing flock()ers...
by ferrency (Deacon) on Dec 05, 2001 at 19:26 UTC
    I wasn't sure what conditions could cause a "partial write." If it's only a disk-full condition, and not a hardware-buffer-full condition or something, then syswrite will probably do what I want in all the cases I care about. Thanks!

    The alternative method you suggest is the solution I'm currently using. It works just fine, but I don't like having to reopen the files all the time. I'm probably just picky :)

    Thanks,
    Alan

    Update: Ah, sorry for the misunderstanding. Someone else suggested the same alternative in another thread (open each file only once, and flock many times, in each child).

    I think the best solution is a combination of that with a redistribution of the problem space across my child processes. Instead of sending each child only one sub-problem and then letting it die, I could avoid one fork per sub-problem by giving each of my N children 1/Nth of the problem space all at once. If I do that, opening each file once per child will make a lot more sense, because each child will be doing a lot more writing. As it is now, each child only writes a few lines and does not always write to every possible file, so opening every file once per child might actually cost more opens than continuing to open each file only when I append a line :)
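
    A sketch of that restructuring, assuming a single output file (the problem list, the "work" being done, and the filename are all stand-ins):

        #!/usr/bin/perl
        # Sketch: give each of N children 1/Nth of the sub-problems up
        # front, so each child opens once and flocks around each append.
        use strict;
        use warnings;
        use Fcntl qw(:flock);

        my $kids     = 4;
        my @problems = (1 .. 100);    # hypothetical sub-problems

        for my $i (0 .. $kids - 1) {
            defined(my $pid = fork) or die "fork: $!";
            next if $pid;             # parent continues forking

            # Child $i takes every $kids-th sub-problem: one open, many
            # flock-guarded appends.
            open(my $fh, '>>', 'results.log') or die "open: $!";
            select((select($fh), $| = 1)[0]);    # autoflush before unlocking

            for my $p (grep { $_ % $kids == $i } @problems) {
                my $result = $p * $p;            # stand-in for the real work
                flock($fh, LOCK_EX) or die "flock: $!";
                print $fh "problem $p => $result\n";
                flock($fh, LOCK_UN);
            }
            close $fh;
            exit 0;
        }
        wait for 1 .. $kids;    # reap all the children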

    Thanks!