In short, the problem is that your external command (run via backticks) assumes that stderr is file descriptor (fileno) 2, while the STDERR filehandle that you reopened onto $err is no longer fileno 2 — it's a Perl-internal, in-memory handle, i.e. fileno -1.
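A minimal sketch of what's going on (the scalar name $err mirrors the OP's code; the warn message is just illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Reopen STDERR onto an in-memory scalar, as in the OP's code:
my $err;
open STDERR, '>', \$err or die "open: $!";

# STDERR is now a Perl-internal (in-memory) handle with no
# underlying OS file descriptor, so fileno() reports -1, not 2:
print "fileno(STDERR): ", fileno(STDERR), "\n";   # prints -1

# warn() goes through the Perl-level handle, so it IS captured:
warn "from perl\n";
print "captured: $err";                           # prints "captured: from perl"

# A child process, however, writes its stderr to the OS-level
# descriptor 2, which no longer has anything to do with $err.
```

In other words, the redirection works for output generated within the same Perl interpreter, but a child process never sees it.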
For a longer explanation, see this node (yes, somewhat different context, but essentially the same issue — just mentally substitute STDERR for STDOUT), or one of the nodes by tye on the issue, e.g. this recent one.
Update: Here's a code snippet which should essentially work (adapted from the node I was referring to).
#!/usr/bin/perl

# save original STDERR
open my $saved_stderr, ">&STDERR";

# create a pipe, which we'll use to read STDERR
local(*RH, *WH);
pipe RH, WH;

# connect the writing side of the pipe to STDERR, with
# STDERR being (and remaining) fileno 2 (!)
open STDERR, ">&WH" or die "open: $!";

# debug: verify that fileno(STDERR) really is 2
printf "fileno(STDERR): %d\n", fileno(STDERR);

# execute external command
my $out = `perl -e "print 'hello world'; die('this is a fatal error')"`;
my $ret = $?;

# close WH to avoid buffering issues (pipes are buffered)
close WH;

# read output (one line)
# (todo: fix so it doesn't block when there's nothing to read!)
my $err = <RH>;
close RH;

# restore original STDERR
open STDERR, ">&", $saved_stderr or die "open: $!";

print "return value: $ret\n";
print "captured stdout: $out\n";
print "captured stderr: $err\n";
Update 2: I think it's worth adding a word of caution: don't treat this piece of code as a ready-made recipe. It is mainly meant to illustrate what the problem with the OP's original code is, and that you can in principle get it to work if you arrange for the different parts to agree on the file descriptor. As it stands, there's a potential deadlock (also see tye's note).
Even if you fix things to not block on read (as hinted at in the code comment), there's still the problem that the pipe's system buffer¹ may fill up when a lot of output is being sent to stderr. That is, because the external command runs synchronously, the program might lock up inside the backticks (the child blocks, unable to write any further) before subsequent code gets a chance to drain the pipe...
The proper way to handle this would be to set up an asynchronous process or thread that keeps reading from the pipe while the external program is still running. However, that would make things quite a bit more complex, so using two temporary files may ultimately be the way to go (as the Perl docs suggest), if you really need to capture stdout and stderr separately; in that case, be careful to create the temp files in a secure way(!). Alternatively, use IO::CaptureOutput, as suggested by wfsp below (which hopefully does it correctly).
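For completeness, here's a sketch of the two-temp-file approach mentioned above. File::Temp takes care of creating the files securely; the external command and the shell redirection are illustrative and assume a Unix-like shell:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);   # creates temp files securely

# UNLINK => 1 removes the files automatically at program exit
my ($out_fh, $out_file) = tempfile(UNLINK => 1);
my ($err_fh, $err_file) = tempfile(UNLINK => 1);

# Redirect both streams at the shell level; the child writes to
# regular files, so there's no pipe buffer that could fill up.
system(qq{perl -e "print 'hello'; warn 'oops'" >$out_file 2>$err_file});
my $ret = $? >> 8;

# Slurp the results back in from the still-open handles.
my $out = do { local $/; <$out_fh> };
my $err = do { local $/; <$err_fh> };

print "exit status: $ret\n";
print "stdout: $out\n";
print "stderr: $err\n";
```

No deadlock is possible here, at the cost of not seeing any output until the command has finished.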
In other words, think twice before you consider using something like this in production code.
___
¹ Typically, system (stdio) buffer sizes are 4-16 KB, unless changed with the C library call setvbuf(). On my Linux box, for example, it defaults to 8 KB.
In reply to Re: capturing stderr of a command, invoked via backticks
by almut
in thread capturing stderr of a command, invoked via backticks
by Anonymous Monk