in reply to Problem with passing my script's STDIN to child

You should try

open STDIN, "<&", $c;
which does a dup2 to copy the handle $c onto STDIN. This way file descriptor 0 of perl (the real stdin in the eyes of unix, not only the STDIN glob of perl) will be associated with $c. Stdin normally has the FD_CLOEXEC flag unset, so it will be passed to the child process.
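
For concreteness, a minimal self-contained sketch of that pattern (the input.txt file and the wc -l child are placeholders, not anything from the original question):

    use strict;
    use warnings;

    # a handle we want the child to read from; input.txt is just a placeholder
    open my $c, '<', 'input.txt' or die "open: $!";

    # dup2-style reopen: afterwards file descriptor 0 refers to the same file
    # as $c, so any exec'd or system'd child inherits it as its stdin
    open STDIN, '<&', $c or die "cannot dup onto STDIN: $!";

    system('wc', '-l');   # the child counts the lines of input.txt via its stdin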

If you don't have to use stdin itself but only need to pass a filehandle, you can do this instead: clear the FD_CLOEXEC flag like

my $flags = fcntl $c, F_GETFD, 0; fcntl $c, F_SETFD, $flags & ~FD_CLOEXEC;
so that it will be passed to the child. Then somehow tell the number fileno($c) to the child process, and the child can read from that filehandle.
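
A rough sketch of this second approach, assuming the descriptor number is handed to the child as a command-line argument (any channel, such as an environment variable, would do; child.pl is a made-up name):

    use strict;
    use warnings;
    use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);

    open my $c, '<', 'input.txt' or die "open: $!";

    # clear close-on-exec so the descriptor survives the exec in system()
    my $flags = fcntl $c, F_GETFD, 0;
    defined $flags or die "fcntl F_GETFD: $!";
    fcntl $c, F_SETFD, $flags & ~FD_CLOEXEC or die "fcntl F_SETFD: $!";

    # tell the child which descriptor to read from
    system($^X, 'child.pl', fileno($c)) == 0 or die "child failed";

    # and in child.pl the number is turned back into a handle:
    #   my $fd = shift @ARGV;
    #   open my $in, "<&=$fd" or die "fdopen $fd: $!";
    #   print while <$in>;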

Update: Does this post make sense? The whole point is, you should write open STDIN, "<&", $c; instead of *STDIN = $c; because the latter affects only perl's idea of stdin.
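
A quick way to see the difference, assuming $c is some already-open read handle:

    *STDIN = $c;                 # only rebinds perl's STDIN symbol
    print fileno(STDIN), "\n";   # prints $c's descriptor number, not 0;
                                 # file descriptor 0 is untouched, so an
                                 # exec'd child still reads the old stdin

    # whereas open STDIN, "<&", $c; rewires descriptor 0 itself, which is
    # what the child actually inherits across exec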

Re^2: Problem with passing my script's STDIN to child
by suaveant (Parson) on Aug 23, 2004 at 15:37 UTC
    Excellent, thank you. Not only did this fix the problem, but it helped me expose a bug where I wasn't cleaning up my handles properly.

    I had to use open(STDIN, '<&='.fileno($c)) because I am still in perl 5.6.1... sigh.

    The sad thing is I was already using this method for STDOUT... don't know why I didn't use it for STDIN as well...

                    - Ant
                    - Some of my best work - (1 2 3)

Re^2: Problem with passing my script's STDIN to child
by amw1 (Friar) on Aug 21, 2004 at 18:09 UTC
    Does this have advantages over the IO::Pipe method? (genuinely curious, not being difficult :)

    It's been a good 10 years since I covered this in my OS classes and I haven't really had to use it since.

      Yes, certainly.

      If you copy the input from stdin through a pipe, that's trivially less efficient than if you dup the filehandle. In the second case, the new program just reads directly from whatever file stdin was connected to. With a pipe, you need a separate process that has to copy data from stdin to the pipe every time the second program needs data; under linux a pipe has a buffer of 4096 bytes by default, so the OS has to do a task switch to the first process and back to the second one for every 4K bytes read (it is possible that some of the overhead can be avoided with the sendfile syscall). Also, the first process occupies memory and a process table entry. In addition, you have to take care that data doesn't get held back (or the processes deadlocked) by buffering; and the pipe solution might not even work on win32.
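
      Roughly, the pipe route needs something like the following relay in the first process (sketched with the built-in pipe() rather than IO::Pipe, and with a made-up wc -l child), whereas the dup route is just the single open STDIN, "<&", $c shown above:

          use strict;
          use warnings;

          pipe(my $reader, my $writer) or die "pipe: $!";

          my $pid = fork();
          defined $pid or die "fork: $!";

          if ($pid == 0) {                      # child: read from the pipe
              close $writer;
              open STDIN, '<&', $reader or die "dup: $!";
              exec 'wc', '-l' or die "exec: $!";
          }

          close $reader;                        # parent: relay its own stdin
          while (read(STDIN, my $buf, 4096)) {  # this copy loop, and the
              print {$writer} $buf;             # context switches it causes,
          }                                     # are what the plain dup avoids
          close $writer;
          waitpid $pid, 0;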