The alternative method you suggest is the solution I'm currently using. It works just fine, but I don't like having to reopen the files all the time. I'm probably just picky :)
Thanks,
Alan
Update: Ah, sorry for the misunderstanding. Someone else suggested the same alternative in another thread (open each file only once, and flock it many times, in each child).
I think the best solution is a combination of that with a redistribution of the problem space across my child processes. Instead of sending each child only one sub-problem and then letting it die, I could avoid one fork per sub-problem by giving each of my N children 1/Nth of the problem space all at once. If I do that, opening each file once per child makes a lot more sense, because each child will be doing a lot more writing. As it stands now, each child writes only a few lines and doesn't always write to every possible file, so opening every file once per child might actually cost more opens than continuing to open each file only when I append a line :) Something like the sketch below is what I have in mind.
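For what it's worth, here is a minimal sketch of that combination. The work_on() sub, the file names, and the problem list are all hypothetical stand-ins (my real code isn't in this thread); the point is just the shape: each child takes every Nth sub-problem, opens each output file at most once, and flocks around every append.

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);
use IO::Handle;

# Hypothetical stand-in for the real sub-problem: returns the file
# to append to and the line to write.
sub work_on {
    my ($n) = @_;
    return ("out" . ($n % 3) . ".log", "result for sub-problem $n");
}

my $N        = 4;            # number of children
my @problems = (1 .. 100);   # stand-in for the full problem space

for my $i (0 .. $N - 1) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    next if $pid;            # parent: go spawn the next child

    # Child: handle every Nth sub-problem, i.e. 1/Nth of the space.
    my %fh;                  # filehandles, each opened at most once
    for (my $j = $i; $j < @problems; $j += $N) {
        my ($file, $line) = work_on($problems[$j]);
        unless ($fh{$file}) {
            open $fh{$file}, '>>', $file or die "open $file: $!";
            $fh{$file}->autoflush(1);  # flush before we unlock
        }
        flock($fh{$file}, LOCK_EX) or die "flock $file: $!";
        print { $fh{$file} } $line, "\n";
        flock($fh{$file}, LOCK_UN);    # let a sibling write
    }
    close $_ for values %fh;
    exit 0;                  # child dies only after its whole slice
}

wait() for 1 .. $N;          # parent reaps all N children

Interleaving the slices ($i, $i+N, ...) rather than handing out contiguous blocks should also keep the children roughly balanced if the later sub-problems turn out to be heavier.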
Thanks!