in reply to Taming multiple external processes

Here is a demonstration of how to set up a pipeline like the one you want, in a way that gives you the pid of the first process in the pipeline. Note that it is slightly complex and easy to get wrong (I'm going out of my way to make sure that Perl spawns no shells to interpret shell commands):
#!/usr/bin/perl
use strict;

use IPC::Open2;

local *PIPE;
local *OUT;

my $pid = open(PIPE, "-|", "echo", "hi")
    or die "Can't echo: $!";
print "The pid of the echo command is: $pid\n";
<STDIN>;    # pause here so you can inspect the pid with ps

open2(\*OUT, "<&PIPE", "perl", "-ne", "print uc")
    or die "open2 failed: $!";
print while <OUT>;
While the script is paused at the <STDIN>, you can use ps to verify that the pid you got from open really is the pid of the echo process you launched, before the two programs are tied together.
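As a minimal self-contained version of that check (a sketch; a short-lived echo stands in for your first process, and the comparison is automated with backticks rather than done by eye):

```perl
#!/usr/bin/perl
# Sketch: confirm that the pid returned by open "-|" is the pid of
# the spawned child, by asking ps about it.  The child (echo here)
# stays visible to ps until we reap it with close.
use strict;
use warnings;

my $pid = open(my $pipe, "-|", "echo", "hi")
    or die "Can't echo: $!";
my $seen = `ps -p $pid -o pid=`;    # ask ps about that pid
$seen =~ s/\s+//g;
print $pid == $seen ? "pids match\n" : "pids differ\n";
print scalar <$pipe>;               # read the child's output
close $pipe;                        # reaps the child
```
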

UPDATE: I left out the assignment to pid. Added. (I had included an earlier version of the code, then commented about a later one that demonstrated the pid, then partially updated the code but missed one line. My apologies for any confusion.)

Replies are listed 'Best First'.
Re: Re: Taming multiple external processes
by 87C751 (Acolyte) on Jan 02, 2004 at 09:36 UTC
    Ah... the $pid assignment did get frobbed. After re-adding that, your example works fine. But applying it to my task fails. The server process appears to hang on the open2 call. The child process is started, according to ps, but never starts reading input (according to its STDERR output). When I kill the open() process, the whole server dies with open2: close(main::PIPE) failed:  at ./testserver3 line 30, which seems to say that the PIPE handle couldn't be closed in the parent.

    This is somewhat frustrating.

      You may have copied part of it slightly wrong. Where I used local filehandles and passed the dup target to open2 as a named string ("<&PIPE"), it really was important that I did so.

      If you read the documentation for IPC::Open2 you'll see that I'm trying to get Perl to dup the filehandle directly. That matters because a close there will fail: your first process still has unread data that it wants PIPE to read before it exits.

      If that isn't the problem, then you should make your own private copy of IPC::Open3, be sure that you are loading that version (e.g. with a use lib), and then start inserting debugging statements to figure out where and when it is doing the close.
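      A quick way to confirm which copy of IPC::Open3 actually got loaded (the ./mylib path below is a hypothetical private lib directory; if it doesn't exist, Perl simply falls back to the installed copy):

```perl
#!/usr/bin/perl
# Sketch: report which file IPC::Open3 was loaded from.  If your
# private copy in ./mylib (hypothetical path) is first on @INC,
# its path is what gets printed here.
use strict;
use warnings;
use lib './mylib';
use IPC::Open3;

print "IPC::Open3 loaded from: $INC{'IPC/Open3.pm'}\n";
```
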

      Also, I will warn you: depending on how app1 is coded, figuring out how to kill it might not work for you. After all, when it forks, its kids might be talking to their STDOUT, which means that the second process in the pipeline is still being talked to. And if the kids ignore the signal when dad dies...
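      If you do end up needing to take down the whole tree, one approach (a sketch assuming a POSIX system, with "sleep 60" standing in for app1) is to put the child in its own process group and signal the group instead of the single pid. Kids that ignore the signal will, of course, still survive:

```perl
#!/usr/bin/perl
# Sketch: kill a child and anything it forked by signalling its
# process group.  "sleep 60" is a stand-in for app1.
use strict;
use warnings;
use POSIX ();

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    POSIX::setpgid(0, 0);           # child leads a fresh process group
    exec "sleep", "60" or die "exec failed: $!";
}
POSIX::setpgid($pid, $pid);         # set it from the parent too, to dodge a race
kill -15, $pid;     # negative signal => Perl signals the whole process group
waitpid($pid, 0);
print "process group signalled and reaped\n";
```
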

      Another thing to re-check. Does app1 really talk on STDOUT? If it talks on STDERR, or changes its behaviour when it finds itself not on a terminal, it could be very confusing. The output of app1 arguments here | cat 2> /dev/null at the shell might be revealing. (That calls it with output hooked to a pipe, and hides STDERR.)
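      You can see the not-on-a-terminal effect for yourself with a one-liner (a sketch; Perl's -t test is the same isatty() check many programs make internally):

```perl
#!/usr/bin/perl
# Sketch: report whether our own STDOUT is a terminal.  Run it plain,
# then piped through cat, and watch the answer change -- app1 may be
# making the same test and altering its behaviour accordingly.
use strict;
use warnings;

print -t STDOUT ? "STDOUT is a terminal\n" : "STDOUT is not a terminal\n";
```
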

        It appears that the manner in which app1 is coded may be the culprit. If I change from a network stream to a local file as input, the problem disappears, and even the simple open(FH,"app1 args | app2 args |"); form works as expected (all children and the enclosing shell are killed when I close FH).
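        For reference, that simple form looks like the following (a sketch, with echo and tr as hypothetical stand-ins for app1 and app2; note that, unlike the list-form open above, this one does spawn a shell):

```perl
#!/usr/bin/perl
# Sketch of the simple shell-pipeline open() form, with echo and tr
# standing in for app1 and app2.
use strict;
use warnings;

open(my $fh, "echo hi | tr a-z A-Z |")
    or die "pipeline failed: $!";
print while <$fh>;
close $fh;      # close() waits for the pipeline to finish
```
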

        Thus I'm off to visit app1's source. Many thanks for the guidance.

Re: Re: Taming multiple external processes
by 87C751 (Acolyte) on Jan 02, 2004 at 08:47 UTC
    Did this example get frobbed in the posting? $pid is never assigned to or declared, and that <STDIN>; all by itself looks mighty strange. Oh, and it doesn't seem to work. :)

    I think I see what you're aiming at, though.