Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

This is more of an academic question, since I've found that an IPC module does what I needed.

Basically I wanted to execute a system command and get its STDERR, STDOUT, and return value. So I invoke the command with backticks in two different subroutines, one of which redirects STDERR to a variable, and the other doesn't.

When I don't touch STDERR, the invoked command prints its STDERR to the script's STDERR. But when I try to capture the script's STDERR, the command's STDERR disappears altogether. This baffles me, because if the script itself sends to STDERR (i.e. via the warn builtin), it gets captured just fine.

#!/usr/bin/perl -w
use strict;
use Data::Dumper;

# execute command without capturing STDERR
sub exec_nocap {
    # execute command via backticks, getting output (STDOUT) and return value
    my $command = shift;
    my $out     = `$command`;
    my $return  = $?;

    # dump everything (in an anonymous hash)
    print Dumper {
        'Return value' => $return,
        'STDOUT'       => $out,
    };
}

# execute command, capturing STDERR
sub exec_cap {
    # dupe STDERR, to be restored later
    open(my $stderr, ">&", \*STDERR)
        or do { print "Can't dupe STDERR: $!\n"; exit; };

    # close first (according to the perldocs)
    close(STDERR) or die "Can't close STDERR: $!\n";

    # redirect STDERR to in-memory scalar
    my $err;
    open(STDERR, '>', \$err)
        or do { print "Can't redirect STDERR: $!\n"; exit; };

    # just to demonstrate that STDERR capturing is working
    warn('this is a warning');

    # execute command via backticks, getting output (STDOUT) and return value
    my $command = shift;
    my $out     = `$command`;
    my $return  = $?;

    # restore original STDERR
    open(STDERR, ">&", $stderr)
        or do { print "Can't restore STDERR: $!\n"; exit; };

    print Dumper({
        'Return value'      => $return,
        'STDOUT'            => $out,
        'Redirected STDERR' => $err,
    });
}

exec_nocap('perl -e "die(\'this is a fatal error\')"');
print "\n" x 5;
exec_cap('perl -e "die(\'this is a fatal error\')"');
As I said, this is purely an academic question, but why is this happening?

Replies are listed 'Best First'.
Re: capturing stderr of a command, invoked via backticks
by jdporter (Paladin) on Sep 07, 2007 at 22:33 UTC

    Good question. You need to remember that "redirecting" output (including stderr) to a variable is magic which happens only within the current perl process. It's saying "redirect my stderr to a variable". This does not propagate into subprocesses. (Wouldn't that be nice.) But the current process's real stderr has been turned off, and this fact does propagate into subprocesses.

    As perlop says,

    To read both a command's STDOUT and its STDERR separately, it's easiest to redirect them separately to files, and then read from those files when the program is done.

    Of course, you could also try using IPC::Open3.
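The perlop advice above might look like the following sketch (the quoted one-liner stands in for a real command, and the temp-file handling is just one reasonable way to do it):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# create temp files to hold the command's stdout and stderr
my ($outfh, $outfile) = tempfile();
my ($errfh, $errfile) = tempfile();

# run the command with each stream redirected to its own file
system(qq{perl -e "print qq(to stdout); print STDERR qq(to stderr)" 1>$outfile 2>$errfile});
my $ret = $? >> 8;

# slurp each file back in once the program is done
my $slurp = sub {
    my ($file) = @_;
    open my $fh, '<', $file or die "open $file: $!";
    local $/;    # slurp mode
    <$fh>;
};
my $out = $slurp->($outfile);
my $err = $slurp->($errfile);
unlink $outfile, $errfile;

print "stdout: $out\nstderr: $err\nexit: $ret\n";
```

Since the redirection is done by the shell on real file descriptors, it propagates into the subprocess, unlike the in-memory trick.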

    A word spoken in Mind will reach its own level, in the objective world, by its own weight
      As perlop says,
      To read both a command's STDOUT and its STDERR separately, it's easiest to redirect them separately to files, and then read from those files when the program is done.
      Of course, you could also try using IPC::Open3.

      But beware that trying to read both STDOUT and STDERR separately using Perl and IPC::Open3 can be a good way to discover what "deadlock" means. You'd need to use something like select (select doesn't work on pipes on Win32 and the equivalent that does work isn't packaged up for easy use from Perl) or have two threads of execution, so coding this correctly may be much more complicated than using a temporary file or two.

      - tye        
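For what it's worth, here is a minimal IPC::Open3 sketch. It deliberately sidesteps the select/threads machinery, which is only safe when the child's output is small enough to fit in the pipe buffers; with larger output, reading one stream to EOF before the other is exactly the deadlock tye describes:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

# open3 merges stderr into stdout unless given a pre-created glob
my ($in, $out);
my $err = gensym;
my $pid = open3($in, $out, $err,
                'perl', '-e', 'print "to stdout"; print STDERR "to stderr"');
close $in;    # nothing to feed the child

# WARNING: only safe for small outputs -- see the deadlock caveat above
my $stdout = do { local $/; <$out> };
my $stderr = do { local $/; <$err> };

waitpid($pid, 0);
my $ret = $? >> 8;

print "stdout: $stdout\nstderr: $stderr\nexit: $ret\n";
```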

Re: capturing stderr of a command, invoked via backticks
by almut (Canon) on Sep 07, 2007 at 22:45 UTC

    In short, the problem is that your external command (being run via backticks) is assuming that stderr is file descriptor/fileno 2, while the STDERR filehandle that you reopened to \$err is no longer fileno 2 (it's Perl-internal, i.e. fileno -1).

    For a longer explanation, see this node (yes, somewhat different context, but essentially the same issue — just mentally substitute STDERR for STDOUT), or one of the nodes by tye on the issue, e.g. this recent one.
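The fileno claim is easy to verify. A handle opened onto an in-memory scalar lives entirely inside PerlIO, with no OS-level file descriptor behind it, so a child process (which inherits only real descriptors) can never write to it:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# in-memory handle: pure PerlIO, no OS file descriptor, fileno is -1
my $buf;
open my $memfh, '>', \$buf or die "open: $!";
printf "fileno of in-memory handle: %d\n", fileno($memfh);

# a duped copy of the real STDERR, by contrast, has a real descriptor
open my $realfh, '>&', \*STDERR or die "dup: $!";
printf "fileno of duped STDERR:    %d\n", fileno($realfh);
```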

    Update: Here's a code snippet which should essentially work (adapted from the node I was referring to).

    #!/usr/bin/perl

    # save original STDERR
    open my $saved_stderr, ">&STDERR";

    # create a pipe, which we'll use to read STDERR
    local(*RH, *WH);
    pipe RH, WH;

    # connect the writing side of the pipe to STDERR, with
    # STDERR being (and remaining) fileno 2 (!)
    open STDERR, ">&WH" or die "open: $!";

    # debug: verify that fileno(STDERR) really is 2
    printf "fileno(STDERR): %d\n", fileno(STDERR);

    # execute external command
    my $out = `perl -e "print 'hello world'; die('this is a fatal error')"`;
    my $ret = $?;

    # close WH to avoid buffering issues (pipes are buffered)
    close WH;

    # read output (one line)
    # (todo: fix so it doesn't block when there's nothing to read!)
    my $err = <RH>;
    close RH;

    # restore original STDERR
    open STDERR, ">&", $saved_stderr or die "open: $!";

    print "return value: $ret\n";
    print "captured stdout: $out\n";
    print "captured stderr: $err\n";

    Update 2: I think it's worth adding a word of caution: don't treat this piece of code as a recipe solution. It is mainly meant to illustrate what the problem is with the OP's original code, and that you can in principle get it to work, if you arrange for the different parts to agree on the file descriptor. As it is, there's a potential deadlock situation (also see tye's note).

    Even if you fix things to not block on read (as hinted at in the code comment), there's still the problem that the pipe's system buffer [1] may fill up when a lot of output is being sent to stderr. I.e., due to the synchronous execution of the external command, the program might lock up in there (because of not being able to write any longer), before subsequent code gets a chance to empty the buffer...

    The way to handle this properly would be to set up an asynchronous process/thread that takes care of reading the buffer while the external program is still running. However, this would make it quite a bit more complex, so using two temporary files might ultimately be the way to go (as the Perl docs say), if you really need to capture stdout/stderr separately — though, in this case, be careful to create the temp files in a secure way(!). Alternatively, use IO::CaptureOutput, as suggested by wfsp below (which hopefully does it correctly).

    In other words, think twice before you consider using something like this in production code.

    ___

    [1] Typically, system (stdio) buffer sizes are 4-16 KB, unless changed with the C lib call setvbuf(). On my Linux box, for example, it defaults to 8 KB.

Re: capturing stderr of a command, invoked via backticks
by wfsp (Abbot) on Sep 08, 2007 at 10:49 UTC
    I hit this snag myself recently and found that IO::CaptureOutput solved the problem very neatly.
Re: capturing stderr of a command, invoked via backticks
by erroneousBollock (Curate) on Sep 08, 2007 at 04:13 UTC
    While probably shell-specific, this works for me:

      my @output = `./script 2>&1 | cat`;

    Of course that co-mingles the script's STDOUT and STDERR.

    -David

      Trivia:

      The "| cat" part is superfluous. What's more, if you drop it, then you get a little-known feature of Perl to kick in and you have a solution that doesn't require support from the shell. Unfortunately, it isn't implemented in all builds of Perl. So you can avoid the shell when using a Unixy perl but still need the shell to handle "2>&1" for you in other environments (and so this doesn't work on Win98, if you can even find a copy).

      If you give Perl a command that contains no shell meta characters, then Perl just splits the command on whitespace and execs the specified program itself, skipping the shell, as if you'd passed a list of parameters to exec. The little-known feature is that if you give a command that only contains two shell meta characters and those are the > and the & in 2>&1 and that is at the end of the command, then Perl strips that off the end, splits the rest up, does a quick dup2(1,2) (which makes STDERR a dup of STDOUT), and execs the specified program directly.

      This is a feature of exec in Unixy perls. But Unixy perls implement system and qx by using this same exec code so the feature applies in this case as well.
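A one-liner that triggers the shortcut tye describes (the command below contains no shell metacharacters apart from the trailing "2>&1", so on a Unixy perl no shell is spawned; perl strips the "2>&1", dup2()s STDERR onto STDOUT, and execs the command directly):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# "perl -e warn" writes "Warning: something's wrong at -e line 1."
# to its STDERR; the trailing 2>&1 routes it into the backticks
my $merged = `perl -e warn 2>&1`;
print "captured: $merged";
```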

      - tye        

        The "| cat" part is superfluous.
        Hmmm, you're right of course. I think I formed the habit from shell-isms like so:
        ./script 2>&1 > file
        In that case, without the | cat, STDERR is not redirected to file, but that's probably another of my misunderstandings of the mechanism involved.

        -David
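The left-to-right ordering behind David's shell-ism can be demonstrated from Perl (Unix-only: this relies on a POSIX shell and /dev/null):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Shell redirections apply left to right: "2>&1" first points fd 2
# at wherever fd 1 currently goes (the pipe back to the backticks),
# and only then does "1>/dev/null" move fd 1 away.  Net effect:
# backticks capture the child's STDERR, and its STDOUT is discarded.
my $captured = `perl -e "print qq(to stdout); print STDERR qq(to stderr)" 2>&1 1>/dev/null`;
print "captured: $captured\n";
```

Reversing the order (`1>/dev/null 2>&1`) would instead discard both streams, which is the trap with `./script 2>&1 > file`.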