in reply to Threading read access to a filedescriptor

No, fork() doesn't share much of anything between processes. The new process inherits copies of everything, including open file descriptors. In particular, fork() never creates shared memory[1].

Another thing that isn't shared is the buffering that efficient one-line-at-a-time reading of a file requires. So using <$fh> isn't going to do a very good job of distributing lines between processes, because each process is going to read and buffer much more than just the next line.

Now, the current file position can be shared between file descriptors. My first guess would have been that it wouldn't be shared after a simple fork(), but you seem to imply otherwise. And it makes sense that it would be: descriptors duplicated by fork() refer to the same open file description, and the file offset lives there. If fork() didn't share it, I don't know how you'd go about sharing it (perhaps by passing an open file descriptor over a socket?).
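That sharing is easy to demonstrate. Here's a sketch (the temp-file contents and byte counts are invented for illustration): the child's sysread() advances the offset, and the parent picks up right where the child stopped:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write ten known bytes to a scratch file.
my ($out, $name) = tempfile();
print $out '0123456789';
close $out;

open my $in, '<', $name or die "open: $!";

my $pid = fork() // die "fork: $!";
if ($pid == 0) {                 # child
    sysread $in, my $kid, 5;     # consumes bytes 0-4, moving the shared offset
    exit 0;
}
waitpid $pid, 0;

sysread $in, my $buf, 5;         # parent continues where the child stopped
print "parent read '$buf'\n";    # prints '56789'

close $in;
unlink $name;
```

sysread() is used deliberately: it bypasses the per-process buffering described above, so only the shared kernel offset is in play.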

To do something like this I'd resort to a pipe with a record length preceding each record, so that the readers could efficiently read an entire record. This requires a writer process that reads the input file and puts the records onto the pipe. (If the records are always short, then you could rely on the behavior of pipes with multiple readers under Unix: have the writer put each record onto the pipe with a single syswrite() and have each reader pull one record per sysread(). But writes are only guaranteed atomic up to PIPE_BUF bytes, so I'd probably avoid that type of fragile solution.)
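A minimal sketch of that length-prefixed scheme (the record data is made up, and only one reader is shown, but the reading side works the same in each forked reader):

```perl
use strict;
use warnings;

pipe my $r, my $w or die "pipe: $!";

my $pid = fork() // die "fork: $!";
if ($pid == 0) {                     # writer child
    close $r;
    for my $rec ("alpha\n", "beta\n") {
        # 4-byte big-endian length, then the record itself
        syswrite $w, pack('N', length $rec) . $rec;
    }
    close $w;
    exit 0;
}

close $w;                            # reader (parent)
my @records;
while (sysread($r, my $len, 4) == 4) {
    # Records here are tiny; in general sysread() can return short,
    # so a production reader would loop until the whole record arrives.
    sysread $r, my $rec, unpack('N', $len);
    push @records, $rec;
}
close $r;
waitpid $pid, 0;
print "got: $_" for @records;
```

Because every reader knows exactly how many bytes to pull, no reader ever buffers past its own record.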

Alternately, you could give the writer process a separate pipe to each reader and pick between the pipes using select(); this avoids the need to write a record length in front of each record.
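That fan-out can be sketched with the stock IO::Select module (a wrapper around select(); the reader count and data here are invented). Each pipe has exactly one reader, so no length prefix is needed:

```perl
use strict;
use warnings;
use IO::Select;

my (@read_ends, @write_ends);
for (1 .. 2) {
    pipe my $r, my $w or die "pipe: $!";
    push @read_ends,  $r;
    push @write_ends, $w;
}

for my $i (0 .. $#read_ends) {
    my $pid = fork() // die "fork: $!";
    next if $pid;                        # parent keeps forking
    close $_ for @write_ends;            # reader child: close all write ends
    my $fh = $read_ends[$i];
    print "reader $i: $_" while <$fh>;   # sole reader, so buffering is safe
    exit 0;
}
close $_ for @read_ends;                 # writer (parent)

my $sel   = IO::Select->new(@write_ends);
my @lines = map { "record $_\n" } 1 .. 4;
while (@lines) {
    for my $w ($sel->can_write) {        # any pipe whose reader has room
        last unless @lines;
        syswrite $w, shift @lines;
    }
}
close $_ for @write_ends;                # readers see EOF and exit
wait for 1 .. 2;
```

Note that every process closes the pipe ends it doesn't use; otherwise the readers would never see EOF.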

[1] Well, it probably marks the memory copy-on-write: the pages are shared read-only and get copied by a fault handler when and if a process ever tries to write to them. But this is really just an optimization trick, not a way to share anything between processes.

        - tye (but my friends call me "Tye")

Replies are listed 'Best First'.
Re: (tye)Re: Threading read access to a filedescriptor
by smferris (Beadle) on Jan 30, 2001 at 01:02 UTC

    The consensus is that the instance of $new is going to be cloned rather than shared. Bummer. However, I don't see this as a problem yet.

    The docs say that open filehandles will be dup-ed so that closing is handled properly (closing one doesn't close the other), although the seek pointer IS shared between the processes.

    My problem is still that I have to keep them from reading at the same instant. Now to implement it.

    ( Keeping in mind I'd really like to stick with default/stock perl modules.)

    My first thought is to use semaphores, but I don't see an easy interface such as pop/shift. Or am I making it too complicated? I'll keep working on it.

    But if someone would like to chime in with an example of locking processes using semaphores, I'd appreciate it! 8)

    Thx, SMF 8)
      "perldoc -f flock"

      But most of the rest of my post still applies.

              - tye (but my friends call me "Tye")
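      To flesh that out, here's a sketch of the flock() route (file names and line counts are invented). Two details matter: each worker opens its OWN handle on a separate lock file after fork(), because flock() locks belong to the open file description and a fork()-inherited handle would share the lock rather than contend for it; and the workers sysread() a byte at a time so nobody buffers past its line, per the caveat above about <$fh> buffering.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Build a small shared input file (contents are made up).
open my $mk, '>', 'shared_input.txt' or die "open: $!";
print $mk "line $_\n" for 1 .. 6;
close $mk;

open my $in, '<', 'shared_input.txt' or die "open: $!";

for (1 .. 3) {
    my $pid = fork() // die "fork: $!";
    next if $pid;                          # parent keeps forking
    # Each worker opens its own lock handle so flock() actually contends.
    open my $lock, '>', 'reader.lock' or die "open lock: $!";
    while (1) {
        flock $lock, LOCK_EX or die "flock: $!";
        my $line = '';
        while (sysread $in, my $ch, 1) {   # unbuffered, one byte at a time
            $line .= $ch;
            last if $ch eq "\n";
        }
        flock $lock, LOCK_UN;
        last unless length $line;          # EOF: nothing left to claim
        print "pid $$ got: $line";
    }
    exit 0;
}
wait for 1 .. 3;
unlink 'shared_input.txt', 'reader.lock';
```

      A SysV semaphore (IPC::SysV, also a stock module) would serialize the reads the same way, but flock() needs far less ceremony. The byte-at-a-time reading is slow; for real workloads, the length-prefixed pipe scheme above scales better.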