> File descriptor 1 may be closed, which would cause open STDOUT, ">&=1" to fail.
I'm aware of that, which is why I mentioned that you shouldn't check for errors here, but rather let the call fail silently. I also mentioned that if file descriptor 1 is closed, the dup behind the subsequent open will pick the then-free file descriptor 1 anyway, because it's the lowest available one (this is the way dup(2) works, and it's also why you need ">&" and not ">&=" in that open statement).
The idea behind the open STDOUT, ">&=1" statement is simply to make sure STDOUT is associated with file descriptor 1 (to trigger the "special" behavior of open I mentioned, which results in dup'ing the descriptor of the child's side of the pipe to descriptor 1). The end result is the same whether the call succeeds or fails.
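In case the distinction between the two modes isn't clear, here's a small sketch (the handle names are just illustrative): ">&" does a real dup(2), creating a new descriptor, while ">&=" merely wraps the given descriptor number, fdopen(3)-style, without duplicating anything:

#!/usr/bin/perl -w
use strict;

open my $dup,   ">&",  fileno(STDERR) or die $!;  # new descriptor (e.g. 3)
open my $alias, ">&=", fileno(STDERR) or die $!;  # shares descriptor 2
printf STDERR "dup: %d, alias: %d\n", fileno($dup), fileno($alias);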
> If file descriptor 1 is not closed and it is not STDOUT, then it is probably attached to some other unrelated file handle, say FOO. The xclose call will affect both STDOUT and FOO as they share the same file descriptor, breaking any code using FOO in the parent process.
I'm not sure what xclose you're referring to, or why you're worried about breaking a file descriptor in the parent. Closing a file descriptor in the child does not render the parent's descriptor dysfunctional (in fact, it's a pretty common and healthy practice to close unneeded dups of file descriptors after a fork).
Try this and you'll see what I mean:
#!/usr/bin/perl -w
use strict;

close STDOUT;
open FOO, ">", "/dev/tty" or die $!;
printf STDERR "fileno(FOO): %d\n", fileno(FOO);
open STDOUT, ">", "dummyfile" or die $!;
pipe my $rdr, my $wtr;
printf STDERR "fileno(pipe-r): %d\n", fileno($rdr);
printf STDERR "fileno(pipe-w): %d\n", fileno($wtr);

if (fork) {
    close $wtr;
    my $r = <$rdr>; chomp($r);
    print STDERR "r = <<$r>>\n";
    print FOO "FOO still working\n";
} else {  # child
    close $rdr;
    printf STDERR "[child] fileno(STDOUT) initially: %d\n", fileno(STDOUT);
    # comment this line out (and edit "&=" below), and you'll see echo
    # will no longer write to the pipe
    open STDOUT, ">&=1";
    printf STDERR "[child] fileno(STDOUT) after &=1: %d\n", fileno(STDOUT);
    open STDOUT, ">&" . fileno($wtr) or die $!;
    printf STDERR "[child] fileno(STDOUT) finally: %d\n", fileno(STDOUT);
    exec "/bin/echo", "foobar";
}
__END__
fileno(FOO): 1
fileno(pipe-r): 4
fileno(pipe-w): 6
[child] fileno(STDOUT) initially: 3
[child] fileno(STDOUT) after &=1: 1
[child] fileno(STDOUT) finally: 1
r = <<foobar>>
FOO still working
The general issue is that the child's side of the pipe must be accessible via file descriptor 1 before the exec; otherwise, no normal exec'ed program (echo here) will send its standard output to it.
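For completeness, a more explicit way to achieve the same thing in the child would be dup2(2), which puts the pipe's write end on descriptor 1 directly, whether or not descriptor 1 is currently open (a sketch, reusing $wtr from the script above):

use POSIX ();

# dup2 closes fd 1 first if it is open, then duplicates $wtr's fd onto it,
# so no prior close/open dance is needed
defined POSIX::dup2(fileno($wtr), 1) or die "dup2: $!";
exec "/bin/echo", "foobar";

Note that this updates the descriptor only, not Perl's STDOUT filehandle, which doesn't matter here because the child execs immediately afterwards.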
I've strace'd the system calls Perl issues under the hood in the various cases, and I can't see any problem caused by the extra open STDOUT, ">&=1" statement.
(Note that I'm addressing the Unix side of the issue only.)