MediaTracker has asked for the wisdom of the Perl Monks concerning the following question:

Greetings my Brothers,

I'd like to create two child processes, pipe the first one's STDOUT into the second one's STDIN, and read the first one's STDERR. After reading the pipe manpage and perlipc I came up with this (broken) piece of code:

my ($READ, $WRITE);
pipe($READ, $WRITE);
my $child1 = fork();
unless ($child1) {
    *STDOUT = $WRITE;
    exec('cat', $0);
}
my $child2 = fork();
unless ($child2) {
    *STDIN = $READ;
    exec('grep', 'grep');
}
waitpid($child1, 0);
waitpid($child2, 0);

I think my error is in the assignments to STDIN and STDOUT. I don't fully understand globs, so what you see here is mainly guesswork. I think what I need is some sort of equivalent to the C system call dup2. Any thoughts?
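For reference, Perl's closest equivalent to C's dup2 is re-opening a handle with the ">&" (dup) mode (POSIX::dup2 also exists). A minimal sketch of the idea on a Unix-ish system, with a hypothetical echo child standing in for the real program:

```perl
#!/usr/bin/perl
# Sketch: open(HANDLE, '>&', $other) is Perl's dup2 -- in the child it
# makes STDOUT point at the pipe's write end. Unix assumed.
use strict;
use warnings;

pipe(my $read, my $write) or die "pipe: $!";

my $pid = fork();
die "fork: $!" unless defined $pid;
if ($pid == 0) {
    # Like dup2(fileno($write), 1) in C.
    open(STDOUT, '>&', $write) or die "dup STDOUT: $!";
    close $read;
    exec('echo', 'hello from child') or die "exec: $!";
}

close $write;    # parent closes its copy so the reader sees EOF
my $line = <$read>;
close $read;
waitpid($pid, 0);
print "parent read: $line";
```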

thank you for your time
MT

Replies are listed 'Best First'.
Re: Pipe two child processes
by Limbic~Region (Chancellor) on Aug 30, 2003 at 14:44 UTC
    MediaTracker,
    I am sure some monk will show you some black magic to accomplish what you are asking for (probably with IPC::Open3). I am not that monk. You have asked to do three things.
  • Create two child processes
  • Make the STDOUT of the first the STDIN of the second
  • Read the STDERR of the first from the parent

    This is actually fairly straightforward if you use files as the destination of STDOUT/STDERR, turn on auto-flush, and perhaps throw in a little File::Tail, which is used to read from continuously updated files.

  • Step 1, fork off child process number 1.
  • Step 2, change the location of STDOUT to a file in the first child process
    open(SAVEOUT, ">&STDOUT") or die "Unable to copy STDOUT : $!";
    open(STDOUT, ">stdout") or die "Unable to open new STDOUT : $!";
    select STDOUT;
    $| = 1;
  • Step 3, change the location of the STDERR to a file in the first child
    open(SAVEERR, ">&STDERR") or die "Unable to copy STDERR : $!";
    open(STDERR, ">stderr") or die "Unable to open new STDERR : $!";
    select STDERR;
    $| = 1;
    select STDOUT;
  • Step 4, fork off your second child process
  • Step 5, either open STDIN from the newly created file or use any unique file handle to read from in the second child process
    open(STDINCLONE, "<stdout") or die "Unable to open first process's STDOUT : $!";
  • Step 6, open the file for STDERR of the first child process in the parent program as some unique file handle
    open(ERRORS, "<stderr") or die "Unable to open first process's STDERR : $!";
    Of course, if you need to have STDOUT going both to a terminal AND also need to be able to read it back from the file, you can still do everything I just described here with IO::Tee.
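    Strung together, the six steps might look like the following sketch. A toy print/warn in child 1 stands in for the real program, the filenames stdout and stderr are the ones from the steps above, and File::Tail is left out by simply waiting for child 1 to finish before reading:

```perl
use strict;
use warnings;

# Step 1: fork off child 1; Steps 2-3: point its STDOUT/STDERR at files.
my $child1 = fork();
die "fork: $!" unless defined $child1;
if ($child1 == 0) {
    open(STDOUT, '>', 'stdout') or die "Unable to open new STDOUT : $!";
    open(STDERR, '>', 'stderr') or die "Unable to open new STDERR : $!";
    $| = 1;                        # auto-flush the selected handle (STDOUT)
    print "some output\n";         # stand-in for the real child's work
    warn  "some diagnostics\n";
    exit 0;
}
waitpid($child1, 0);   # no File::Tail here, so wait until the files are complete

# Step 4: fork off child 2; Step 5: read child 1's STDOUT from the file.
my $child2 = fork();
die "fork: $!" unless defined $child2;
if ($child2 == 0) {
    open(STDIN, '<', 'stdout') or die "Unable to open first process's STDOUT : $!";
    print "child2 saw: $_" while <STDIN>;
    exit 0;
}
waitpid($child2, 0);

# Step 6: read child 1's STDERR in the parent.
open(my $errors, '<', 'stderr') or die "Unable to open first process's STDERR : $!";
my $err_line = <$errors>;
close $errors;
print "parent saw error: $err_line";
unlink 'stdout', 'stderr';
```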

    Note: The following is assumed:
    1. The child processes are properly fork'd - see perldoc -f fork if needed
    2. The code will be modified appropriately to incorporate the use of File::Tail if required
    3. The two forked child processes are Perl scripts.
    If these are not Perl scripts whose source you can modify, you can still change the location of STDOUT by using the shell's > redirection syntax. I am not sure about redirecting STDERR on something other than a *nix machine, but 2>stderr would work there. As far as turning on auto-flush for a non-Perl script - I don't know.

    Cheers - L~R

Re: Pipe two child processes
by hawtin (Prior) on Aug 31, 2003 at 09:51 UTC

    The way to dup a file handle is to use the open NEWHANDLE, ">&OLDHANDLE" form, or the open NEWHANDLE, ">&=$fd" form. If you have a copy of the Camel book, look under open.
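    To illustrate the difference between the two forms (the file name dup_demo.txt is just for the demo): ">&" duplicates the descriptor, giving an independent copy, while ">&=" wraps an existing descriptor number without duplicating it:

```perl
use strict;
use warnings;

open(my $log, '>', 'dup_demo.txt') or die "open: $!";

# ">&"  -- duplicate: a fresh descriptor, like C's dup()
open(my $copy, '>&', $log) or die "dup: $!";

# ">&=" -- alias: re-uses the same descriptor number, no duplication
open(my $alias, '>&=', fileno($log)) or die "fdopen: $!";

print {$copy}  "via the duplicated handle\n";
print {$alias} "via the aliased handle\n";

close $copy;    # flushes and closes the duplicate only
close $alias;   # flushes; this also closes $log's underlying descriptor

open(my $in, '<', 'dup_demo.txt') or die "reopen: $!";
my @lines = <$in>;
close $in;
print @lines;
unlink 'dup_demo.txt';
```

    Note that because all three handles share one file position, the two writes land one after the other in the file.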

    In your case I would guess that the code to use would be:

    pipe(READ, WRITE);
    my $child1 = fork();
    unless ($child1) {
        open(STDOUT, ">&WRITE") or die "Cannot open write filehandle\n";
        exec('cat', $0);
    }
    my $child2 = fork();
    unless ($child2) {
        open(STDIN, "<&READ") or die "Cannot open read filehandle\n";
        exec('grep', 'grep');
    }
    close READ;
    close WRITE;    # without this, grep never sees EOF and the waitpid blocks
    waitpid($child1, 0);
    waitpid($child2, 0);

    Of course, being an unreformed Perl 4 hacker, I use bareword file handles; there is no doubt a better way to do it.
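    For completeness, a sketch of the same pipeline with lexical handles and a second pipe, so the parent can also read the first child's STDERR, which is the part the original question still needed (Unix assumed):

```perl
use strict;
use warnings;

pipe(my $out_r, my $out_w) or die "pipe: $!";  # child1 STDOUT -> child2 STDIN
pipe(my $err_r, my $err_w) or die "pipe: $!";  # child1 STDERR -> parent

my $child1 = fork();
die "fork: $!" unless defined $child1;
if ($child1 == 0) {
    open(STDOUT, '>&', $out_w) or die "dup STDOUT: $!";
    open(STDERR, '>&', $err_w) or die "dup STDERR: $!";
    close $_ for $out_r, $out_w, $err_r, $err_w;
    exec('cat', $0) or die "exec cat: $!";
}

my $child2 = fork();
die "fork: $!" unless defined $child2;
if ($child2 == 0) {
    open(STDIN, '<&', $out_r) or die "dup STDIN: $!";
    close $_ for $out_r, $out_w, $err_r, $err_w;
    exec('grep', 'grep') or die "exec grep: $!";
}

# The parent must close its copies of the write ends, or neither grep
# nor the stderr read below would ever see EOF.
close $_ for $out_r, $out_w, $err_w;

my @err_lines = <$err_r>;        # read child1's STDERR
close $err_r;
print "child1 stderr: $_" for @err_lines;

waitpid($child1, 0);
waitpid($child2, 0);
```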