hagus has asked for the wisdom of the Perl Monks concerning the following question:

I'm having some interesting difficulties with those four little items.

My aim is to capture the stdout and stderr from an exec()ed program, but not just interleaved into one stream. So I'm doing something like:

pipe READOUT, WRITEOUT;

if (my $pid = fork) {
    close(WRITEOUT);
    # try to read from READOUT, which fails.
} else {
    close(STDERR);
    close(STDOUT);
    open(STDOUT, ">&WRITEOUT");
    # This next line makes no difference.
    select(STDOUT); $| = 1;
    exec("some_application");
}
The problem is that if some_application buffers stdout, I don't seem to get *anything* back on READOUT. I only get stuff back when I print heaps (ie. exceeding the 4096-byte buffer, I guess). Newlines don't force a buffer flush. If I set $|=1 in some_application, everything is fine!

This is no good though, because I might not have access to the source if some_application is not a perl app. And I don't want to have to modify things like that. So what has failed here?

I've spent way too much time on this - but it's bugging me. It's as if I need a way to force some_application to not set its stdout to buffered mode, or in the parent block above, somehow post exec() set some_application's stdout to non-buffered.

Any hints?

--
Ash OS durbatulk, ash OS gimbatul,
Ash OS thrakatulk, agh burzum-ishi krimpatul!
Uzg-Microsoft-ishi amal fauthut burguuli.

Update 2002-04-08 by mirod: changed <pre> tags to <code> tags

Replies are listed 'Best First'.
Re: Pipe, fork, exec and red-hot pokers.
by rob_au (Abbot) on Apr 08, 2002 at 06:06 UTC
    While not a solution to your problem per se, I might suggest another approach to the redirection of command output using IPC::Open3 - the following snippet of code duplicates the STDOUT and STDERR streams and calls open3 to execute the command, directing the output into the respective handles.

    use Carp;
    use IO::Handle;
    use IPC::Open3;
    use strict;

    my $STDOUT = IO::Handle->new;
    my $STDERR = IO::Handle->new;
    open($STDOUT, ">&STDOUT")
        || croak( 'Cannot duplicate STDOUT to file handle - ', $! );
    open($STDERR, ">&STDERR")
        || croak( 'Cannot duplicate STDERR to file handle - ', $! );
    eval {
        open3( '<&STDIN', $STDOUT, $STDERR, "some_application" ) || die $!;
        waitpid(-1, 0);
    };
    croak( 'Cannot execute command - ', $@ ) if $@;

    # The STDOUT and STDERR output of some_application
    # execution now resides in $STDOUT and $STDERR
    # respectively. eg. print $_ foreach <$STDOUT>

    As always with Perl, TMTOWTDI.


      This doesn't work either - if my program prints something to stdout and runs *forever* (i.e. a daemon), then I get no output. If I put while (<$STDOUT>) just after the open3, everything is silent.

      Unless my some_application itself sets $|=1 ... then I get output from it. If it's still buffered, I have to wait until the stdout buffer is full or the program terminates.

      What I want is to read the stdout of some_application, buffered or no, in real time. The key question is becoming: 'why isn't some_application flushing its buffer on newline?'

      Comments? Please? :)

      --
      Ash OS durbatulk, ash OS gimbatul,
      Ash OS thrakatulk, agh burzum-ishi krimpatul!
      Uzg-Microsoft-ishi amal fauthut burguuli.

        Interesting ... One thing which I would be looking to try at this point would be to incorporate autoflush on the created duplicate handles prior to the open3 invocation. eg.
        my $STDOUT = IO::Handle->new;
        my $STDERR = IO::Handle->new;
        open($STDOUT, ">&STDOUT")
            || croak( 'Cannot duplicate STDOUT to file handle - ', $! );
        open($STDERR, ">&STDERR")
            || croak( 'Cannot duplicate STDERR to file handle - ', $! );
        $STDOUT->autoflush;
        $STDERR->autoflush;

        If you still have no joy with this, you may have to incorporate a flush of the respective buffer prior to reading from it. eg. $STDOUT->flush. This may be necessary if some_application is actively buffering its output.

        I'd be very interested to hear how you go with this and see if this resolves your problems with your specific application.


Re: Pipe, fork, exec and red-hot pokers.
by jeffenstein (Hermit) on Apr 08, 2002 at 06:08 UTC

    My guess is that you want to look at either IPC::Open3 or Expect.

    With Expect, the new program will be on a tty, so it will be line buffered; however, your stdout and stderr will be mixed together.

    With IPC::Open3, the stdout and stderr will be separated, but stdout won't be line buffered.

    However, as it says in the IPC::Open3 docs, you can use IO::Select to do non-blocking I/O on the resulting filehandles.
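    A minimal sketch of that IPC::Open3 + IO::Select combination (not from the thread; it uses perl itself as a stand-in for some_application, and sysread so the parent's own stdio buffering doesn't get in the way):

```perl
use strict;
use warnings;
use IPC::Open3;
use IO::Select;
use Symbol qw(gensym);

# Spawn a child; perl itself stands in for some_application here.
# open3 needs a pre-created glob for the stderr handle.
my $err = gensym;
my $pid = open3(my $in, my $out, $err,
    $^X, '-e', 'print STDOUT "out\n"; print STDERR "err\n";');
close $in;

my $select = IO::Select->new($out, $err);
my %got;
while ($select->count) {
    foreach my $fh ($select->can_read) {
        if (sysread($fh, my $buf, 4096)) {
            # stdout and stderr stay separate streams
            $got{ $fh == $out ? 'stdout' : 'stderr' } .= $buf;
        } else {
            $select->remove($fh);   # EOF on this stream
        }
    }
}
waitpid($pid, 0);

print "stdout: $got{stdout}";
print "stderr: $got{stderr}";
```

    Note that sysread bypasses buffered reads entirely, so the parent sees data as soon as the child's side actually flushes it to the pipe.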

SOLUTION
by hagus (Monk) on Apr 09, 2002 at 04:27 UTC
    The problem in the end appeared to be that stdio checks whether STDOUT is a tty (my some_application that I was exec()ing was in fact a perl app ... but I wanted a general solution that didn't need to depend on that).

    When STDOUT isn't a tty, stdio turns off line buffering and switches to full (block) buffering. Hence my requirement to snarf STDOUT in real time was thwarted, until I figured the following out.
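    A quick way to see that switch in action (a hypothetical demo, not part of the original post): run a perl child through a pipe and time when its newline-terminated line actually arrives.

```perl
use strict;
use warnings;

# The child prints a line immediately, then sleeps before exiting.
# Its stdout is a pipe, not a tty, so stdio block-buffers the output:
# the parent doesn't see the line until the child exits and flushes.
open(my $fh, '-|', $^X, '-e', 'print "early\n"; sleep 2;')
    or die "cannot fork: $!";

my $t0      = time;
my $line    = <$fh>;        # blocks until the child's exit-time flush
my $elapsed = time - $t0;

print "got: $line";
print "after about ${elapsed}s\n";
close $fh;
```

    Despite the newline, the read typically completes only after the child's sleep ends, because nothing was flushed to the pipe until exit.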

    This code snippet forks and execs an application, using select to wait for available output on the stderr and stdout of the exec()ed application. The streams are not interleaved.

    use IO::Pty;
    use IO::Select;

    my $pty = new IO::Pty;
    pipe(my $readerr, my $writeerr);

    if (my $pid = fork) {
        close($writeerr);
        my $select = new IO::Select;
        $select->add($pty);
        $select->add($readerr);
        while (1) {
            foreach my $fh ($select->can_read) {
                my $buf;
                if (sysread($fh, $buf, 4096)) {
                    print "Read ... $buf ...\n";
                }
            }
        }
    } else {
        close($readerr);
        $pty->make_slave_controlling_terminal();
        my $slave = $pty->slave();
        close $pty;
        $slave->clone_winsize_from(\*STDIN);
        $slave->set_raw();
        open(STDOUT, ">&" . $slave->fileno);
        open(STDERR, ">&" . $writeerr->fileno);
        close($slave);
        exec("/home/hagus/foo.pl");
    }
    --
    Ash OS durbatulk, ash OS gimbatul,
    Ash OS thrakatulk, agh burzum-ishi krimpatul!
    Uzg-Microsoft-ishi amal fauthut burguuli.
      It turns out that turning off line buffering when stdout isn't a tty is standard UNIX stdio practice, so it's not a perl-specific wrinkle. Try 'man stdio' on your system (Solaris has a particularly detailed page on this).

      So other options might have included using setvbuf under perl (POSIX::setvbuf?) to manually set the stdout descriptor to line buffered. Of course, that would be useless if the first thing that the exec()ed perl program did was test stdout for tty-ishness and revert to non-line buffered ...

      HTH!

      --
      Ash OS durbatulk, ash OS gimbatul,
      Ash OS thrakatulk, agh burzum-ishi krimpatul!
      Uzg-Microsoft-ishi amal fauthut burguuli.

Re: Pipe, fork, exec and red-hot pokers.
by hagus (Monk) on Apr 08, 2002 at 06:17 UTC
    I know there's Expect and IPC::Open3, but what I am doing *should* work, I believe.

    And if those other options work - why?! I am guessing that underneath they're probably doing the same thing I am. The pragmatic me says 'try the other options'; the pedantic me says 'yes, but *why* doesn't this current method work?'.

    --
    Ash OS durbatulk, ash OS gimbatul,
    Ash OS thrakatulk, agh burzum-ishi krimpatul!
    Uzg-Microsoft-ishi amal fauthut burguuli.

      This may be a long shot, but *how* are you reading that output? Are you using the diamond operator, or are you reading via select (IO::Select) plus read/sysread? It may not have anything to do with your problem at all, but who knows...

      FWIW, something like the following works for me

      # note: this code will go into infinite loop
      # and is not a particularly good piece of code...
      use IO::Handle;
      use IO::Select;

      my ($readfh, $writefh) = (IO::Handle->new(), IO::Handle->new());
      pipe($readfh, $writefh);

      if (my $pid = fork()) {
          $writefh->close();
          my $select = IO::Select->new($readfh);
          my $buf;
          while (1) {
              if ($select->can_read(1)) {
                  if (read($readfh, $buf, 4096, 0)) {
                      print "got '$buf'";
                  }
              }
          }
      } else {
          $readfh->close;
          open(STDOUT, sprintf('>&%d', $writefh->fileno));
          STDOUT->autoflush(1);
          exec('find', '/usr/local/lib/perl5');
      }
        First, I changed 'read' to 'sysread'. But if the program you are exec()ing is this:
        #!/usr/bin/perl
        my $i = 0;
        while (1) {
            print "stdout " . $i++ . "\n";
            sleep 1;
        }
        Then you will find that no output is captured by read. The perl program must set $|=1, which is exactly what I don't want to have to worry about.

        --
        Ash OS durbatulk, ash OS gimbatul,
        Ash OS thrakatulk, agh burzum-ishi krimpatul!
        Uzg-Microsoft-ishi amal fauthut burguuli.