PerlMonks  

Re^2: Synchronizing STDERR and STDOUT

by Ovid (Cardinal)
on Sep 21, 2006 at 10:43 UTC ( [id://574098] )


in reply to Re: Synchronizing STDERR and STDOUT
in thread Synchronizing STDERR and STDOUT

That's been suggested, but it doesn't work :(

From perlfaq8:

Note that you cannot simply open STDERR to be a dup of STDOUT in your Perl program and avoid calling the shell to do the redirection. This doesn't work:

open(STDERR, ">&STDOUT");
$alloutput = `cmd args`;    # stderr still escapes


This fails because the open() makes STDERR go to where STDOUT was going at the time of the open(). The backticks then make STDOUT go to a string, but don't change STDERR (which still goes to the old STDOUT).
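The same FAQ answer's usual workaround is to let the child's shell do the dup inside the command string, so both streams land in the captured output. A minimal sketch (the `perl -e` child and the POSIX shell `2>&1` syntax here are just for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Merge stderr into stdout inside the command itself, so the shell
# (not the capturing Perl program) performs the dup before backticks
# read the pipe. $^X is the path of the running perl.
my $alloutput = `$^X -e 'print "out\\n"; warn "err\\n"' 2>&1`;

# Both streams are now in $alloutput. Their order depends on the
# child's buffering: its unbuffered stderr may arrive before its
# block-buffered stdout.
print $alloutput;
```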

Cheers,
Ovid

New address of my CGI Course.

Replies are listed 'Best First'.
Re^3: Synchronizing STDERR and STDOUT
by shmem (Chancellor) on Sep 21, 2006 at 11:27 UTC
    That's a quirk of backticks. With backticks, a new filehandle is allocated, into which the STDOUT of the subprocess is diverted. But the STDERR of the subshell still goes to your STDOUT.

    Yes, the redirect has to be done in the source process, unless you patch your kernel with a MacFilehandle patch (three button -> one button :-) which lumps STDOUT and STDERR together at will.

    Within the same perl process, it all works fine with filehandles:

    #!/usr/bin/perl -w
    use strict;
    # $Id: blorfl.pl,v 0.0 2006/09/21 11:11:11 shmem Exp $
    print "foo";
    warn "warn";
    print "\n";
    __END__
    qwurx [shmem] ~> perl -e 'open(STDERR,">&", STDOUT); do "blorfl.pl"' 1>/dev/null
    qwurx [shmem] ~> perl -e 'open(STDERR,">&", STDOUT); do "blorfl.pl"' 2>/dev/null
    foo
    warn at blorfl.pl line 5.

    But an invoked subprocess has two brand-new filehandles for STDOUT and STDERR. They happen to be connected to the same filehandle in the parent (which the subprocess doesn't know), and it is free to buffer them as it pleases. You have to do something in the source process, at least make its STDOUT unbuffered, if you want the two streams in synch.

    qwurx [shmem] ~> perl -le 'open(STDERR,">&", STDOUT); system "perl blorfl.pl"' 1>/dev/null
    qwurx [shmem] ~> perl -le 'open(STDERR,">&", STDOUT); system "perl blorfl.pl"' 2>/dev/null
    warn at blorfl.pl line 5.
    foo

    While redirection works as expected, note the reverse order of 'warn' and 'foo' due to buffered STDOUT.
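A sketch of the fix hinted at here, assuming you can change the child: have it enable autoflush (`$| = 1`), and the dup'ed streams arrive in write order (the one-liner child stands in for blorfl.pl):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Same dup-then-system pattern as above, but the child enables
# autoflush ($| = 1), so "foo" is flushed to the shared handle
# before the warning instead of waiting in the stdio buffer.
open( STDERR, '>&', \*STDOUT ) or die "can't dup STDOUT: $!";
system( $^X, '-e', '$| = 1; print "foo\n"; warn "warn\n";' );
```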

    <update>
    BTW, the FAQ entry you quoted should read like this for clarity

    This fails because the open() makes STDERR go to where STDOUT was going at the time of the open(). The backticks then make the subshell's STDOUT go to a string, but don't change the subshell's STDERR (which still goes to the old STDOUT).
    </update>

    --shmem

    _($_=" "x(1<<5)."?\n".q·/)Oo.  G°\        /
                                  /\_¯/(q    /
    ----------------------------  \__(m.====·.(_("always off the crowd"))."·
    ");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}
Re^3: Synchronizing STDERR and STDOUT
by nothingmuch (Priest) on Sep 21, 2006 at 11:10 UTC
    You can split the fork and the exec up if open mashes them together too much.
    pipe CHILDREAD, CHILDWRITE;
    defined( my $pid = fork ) or die "fork: $!";
    if ( $pid ) {
        # read on CHILDREAD
    }
    else {
        open STDERR, ">&CHILDWRITE";
        open STDOUT, ">&CHILDWRITE";
        exec( $somecmd );
    }
    This is precisely the type of plumbing that a shell will do when you say 2>&1, except without the unportable syntax ;-)

    That said, IPC::Run and friends already abstract all of this out, so there's no need to reinvent the wheel.
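For completeness, a sketch of the same capture via IPC::Run (a CPAN module, hence the require guard). Its `>` and `2>` redirection operators are documented; pointing both at one scalar to merge the streams is an assumption of this sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch only: IPC::Run is a CPAN module, so guard with require.
# IPC::Run does the fork/dup plumbing portably; here the stdout and
# stderr operators are (by assumption) aimed at the same scalar.
if ( eval { require IPC::Run; 1 } ) {
    my $both = '';
    IPC::Run::run(
        [ $^X, '-e', 'print "out\n"; warn "err\n"' ],
        '>',  \$both,
        '2>', \$both,
    ) or die "child exited nonzero: $?";
    print $both;
}
else {
    print "IPC::Run not installed; skipping demo\n";
}
```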

    -nuffin
    zz zZ Z Z #!perl

      Forking isn't portable. Windows only emulates it, and the emulation doesn't always work well.

      Update: Also, I don't believe this guarantees that the streams will remain in synch. I've had plenty of problems with Test::Builder output getting corrupted when Test::Harness spits it out, and they must be completely in synch or my code fails.

      Cheers,
      Ovid

      New address of my CGI Course.

        You missed my point.

        The shell will simply do a dup for 2>&1, as I illustrated. The problem is not in the fork; it is that the sugar layer for doing this dup, namely 2>&1, involves reparsing the command line and is thus undesirable and unportable.

        Instead of staying one level above backticks and shell redirects, you can simply do what they do.

        As for fork vs no fork - look at how IPC::Run does it. It's portable to windows and supports redirects.

        WRT synching - I am not sure (this may be implementation-dependent), but if there are two separate buffers for the pre-dupped handles, then they won't be in synch unless you autoflush in the child. If two dupped file descriptors share a buffer, then they will be completely synchronized. In either case, this is not going to be any different from what 2>&1 will give you.
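A quick check of the buffer question, using only core modules: dup'ed descriptors share one kernel-level file description (and hence one file offset), but each Perl filehandle keeps its own userspace buffer, so they are not automatically in synch:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my ( $fh, $file ) = tempfile();
open( my $dup, '>&', $fh ) or die "dup: $!";   # second handle, same file description

print {$fh}  "a";    # sits in $fh's userspace buffer
print {$dup} "b";    # sits in $dup's separate buffer

close $dup;          # flushes "b" first
close $fh;           # then "a"

# The file now reads "ba": the handles buffered independently,
# even though both write through one shared file description.
```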

        -nuffin
        zz zZ Z Z #!perl
Re^3: Synchronizing STDERR and STDOUT
by xdg (Monsignor) on Sep 21, 2006 at 15:58 UTC
    This fails because the open() makes STDERR go to where STDOUT was going at the time of the open(). The backticks then make STDOUT go to a string, but don't change STDERR (which still goes to the old STDOUT).

    So don't use backticks. Redirect STDOUT to a file, redirect STDERR to STDOUT and use system() instead.

    use strict;
    use warnings;
    use File::Temp;

    my $temp_stdout = File::Temp->new;
    local *OLDOUT;
    local *OLDERR;
    open( OLDOUT, ">&STDOUT" );
    open( OLDERR, ">&STDERR" );
    open( STDOUT, ">$temp_stdout" );
    open( STDERR, ">&STDOUT" );

    # Funky quoting for Windows. Sigh.
    system('perl -e "print q{to stdout}; warn q{to stderr}; print q{more to stdout}"');

    close(STDOUT);
    open(STDOUT, ">&OLDOUT");
    open(STDERR, ">&OLDERR");

    open CAPTURED, "<$temp_stdout";
    my $capture = do { local $/; <CAPTURED> };
    close CAPTURED;
    print "Got this:\n$capture";

    That still doesn't solve the problem of keeping them in sync because the subprocess still has two buffered handles. The fact that they go to the same place doesn't matter. You need to get the child process to turn off buffering.

    -xdg

    Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.
