in reply to Those fork()ing flock()ers...

And why don't you simply write more than one file (one per process)?

I guess this might give you even better performance (output can be buffered between writes, and log lines definitely won't be interleaved with each other).

Later on, after processing is done, you can either merge all the files into a single one or process the individual files.

Depending on your actual application, this approach might be faster than doing syswrite() each time.
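
For what it's worth, here is a minimal sketch of the idea in Perl; the file names and the merge step at the end are just illustrative:

    use strict;
    use warnings;

    # Each forked child writes to its own log file, named after its PID,
    # so no flock() is needed while the children are running.
    my @kids;
    for my $task (1 .. 5) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                            # child
            open my $log, '>', "worker.$$.log" or die "open: $!";
            print {$log} "child $$ handled task $task\n";
            close $log;
            exit 0;
        }
        push @kids, $pid;                           # parent keeps going
    }
    waitpid $_, 0 for @kids;

    # Afterwards, merge the per-process logs into a single file.
    open my $out, '>', 'merged.log' or die "open: $!";
    for my $file (glob 'worker.*.log') {
        open my $in, '<', $file or die "open $file: $!";
        print {$out} $_ while <$in>;
        close $in;
    }
    close $out;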

Good luck.

Re: Re: Those fork()ing flock()ers...
by ferrency (Deacon) on Dec 05, 2001 at 19:42 UTC
    The fact that the answer to your initial question is, "Because it's easier the way I'm doing it now" tells me that I'm putting too much effort into squeezing a tiny speed increase out of this system :)

    Parallel::ForkManager is really good at controlling the maximum number of forked children when you're trying to fork multiple children to solve parts of a large problem in parallel. Unfortunately, my particular problem has a lot of medium-sized parts instead of a few large parts (they each wait for network responses, but none do much in the way of calculation; I'd rather be waiting for 10 network results at a time than for each one sequentially).
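
    For reference, the usual Parallel::ForkManager pattern looks roughly like this; do_network_request() is a hypothetical stand-in for the real per-task work:

        use strict;
        use warnings;
        use Parallel::ForkManager;

        my @tasks = 1 .. 10_000;                  # placeholder task list
        my $pm = Parallel::ForkManager->new(10);  # at most 10 children at once

        for my $task (@tasks) {
            $pm->start and next;                  # parent: spawn a child, move on
            do_network_request($task);            # child: hypothetical worker sub
            $pm->finish;                          # child exits here
        }
        $pm->wait_all_children;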

    Actually, now that I think of it, I could easily break my list of 10000 medium-sized problems into 10 lists of 1000 problems each, and feed each one to a child. This would be much more efficient than forking 10000 children, 10 at a time, each solving only one problem. And it would also be quite easy :)
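
    A rough sketch of that batching approach, assuming the work is in @problems and solve() is a hypothetical per-problem worker:

        use strict;
        use warnings;
        use POSIX qw(ceil);

        my @problems = 1 .. 10_000;               # placeholder problem list
        my $children = 10;
        my $per_kid  = ceil(@problems / $children);

        my @pids;
        while (my @chunk = splice @problems, 0, $per_kid) {
            my $pid = fork();
            die "fork failed: $!" unless defined $pid;
            if ($pid == 0) {                      # child solves its whole chunk
                solve($_) for @chunk;             # hypothetical per-problem worker
                exit 0;
            }
            push @pids, $pid;                     # parent keeps forking
        }
        waitpid $_, 0 for @pids;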

    Thank you for your suggestion!

    Alan