The usual way is to close STDIN in the process that doesn't need it.
Do you really need two different STDINs?
Anyway, to play with a filehandle, use a typeglob:
$variable = *FILEHANDLE;
print $variable "things to be printed\n";
or open with the '&' mode (from the Cookbook):
open(OLD_OUT, ">&STDOUT");      # save current STDOUT to OLD_OUT
open(STDOUT, ">/tmp/file.out"); # reassign STDOUT to /tmp/file.out
close STDOUT;
open(STDOUT, ">&OLD_OUT");      # restore the previous STDOUT
Or use local as gloom suggested.
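A minimal sketch of the typeglob approach above, with error checking added; the log file name is a hypothetical stand-in:

```perl
use strict;
use warnings;

# Open a named filehandle, then store its typeglob in a scalar.
open(LOG, '>', '/tmp/demo.log') or die "can't open /tmp/demo.log: $!";
my $fh = *LOG;    # $fh now refers to the LOG filehandle

# The scalar can be used anywhere a filehandle is expected.
print $fh "things to be printed\n";
print {$fh} "braces also work and are less ambiguous\n";

close $fh or die "close failed: $!";
```

The `print {$fh} ...` form makes it unambiguous to the parser that `$fh` is the filehandle rather than the first item of the list to print.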
print STDOUT "hey1";
{
    local *STDOUT;
    open( STDOUT, ">test" ) or die "can't open test: $!";
    print STDOUT "hey2";
}
print STDOUT "hey3";
__________________
Hope this helps
I've experienced similar problems with autoflush and
sockets, particularly on Solaris, which seems to have
a write-ahead buffer of ~8 bytes on some streams that
autoflush doesn't touch (I'm guessing to make it send
larger packets). Anyhow, the solution is to use sysread
and syswrite for all IO that you want to be unbuffered
(this is genuinely unbuffered rather than
automatically flushed). Note that most of perl's IO functions
are buffered and consequently should not be mixed with
sysread/syswrite. In particular, print and eof are buffered
(you can determine end of file by checking the result of
a sysread instead).
If you are using select or any other low-level IO call (this
includes using IO::Select), then the use of sysread and
syswrite is mandatory.
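A minimal sketch of IO::Select combined with sysread/syswrite; a pipe to ourselves stands in for the socket:

```perl
use strict;
use warnings;
use IO::Select;

# Demonstration: a self-pipe stands in for a socket connection.
pipe(my $reader, my $writer) or die "pipe: $!";
syswrite($writer, "hello") == 5 or die "syswrite: $!";

my $sel = IO::Select->new($reader);

# Wait up to 5 seconds for data, then read whatever is available.
# sysread bypasses perl's stdio buffering entirely, so it never
# blocks waiting for a buffer to fill.
if (my @ready = $sel->can_read(5)) {
    my $n = sysread($ready[0], my $buf, 8192);
    die "sysread: $!" unless defined $n;
    print "read $n bytes: $buf\n";
}
```

Because sysread returns as soon as *any* data is available, mixing it with buffered calls like `<$reader>` or `eof` on the same handle would silently drop or duplicate data.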
I should add that I've never experienced any problems
under Linux, only under other OSes like Solaris, and even
then only sporadically. Sometimes data just seems to get
stuck in a buffer and isn't sent until some more data
comes along.
Andrew.
Update: I worked out the exact reason why autoflush
doesn't work very well in these sorts of circumstances.
The docs very specifically say that it only autoflushes
the *output* buffer. If any input buffering is being used,
autoflush will have no effect on it. Thus, autoflush will make
sure that data you send from your script is not
buffered, but makes no such guarantee about data you
read. Therefore, it's fine for writing to a pipe
or a socket, but not a good idea to rely on when reading from
a pipe or a socket (unless you can be sure that the other
side has closed the pipe/socket).
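A quick sketch of what autoflush actually controls, i.e. the output side only:

```perl
use strict;
use warnings;
use IO::Handle;

# Enable autoflush on STDOUT: each print is flushed immediately,
# so nothing lingers in the *output* buffer.
STDOUT->autoflush(1);    # equivalent to setting $| = 1

print "sent immediately\n";

# Note: this does nothing for *input* buffering -- data read with
# <> or read() may still sit in perl's input buffer until a full
# line (or block) has arrived.
```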
See the perlvar man page for details. I would guess the
precise behaviour depends on what C library your version
of perl is linked to. I suspect many flush the input buffer
on a newline, which is why many people fail to notice this
particular subtlety. sysread will give you all the data
that's currently available on a given file handle, newline
or no. You'll need to buffer it yourself until a newline
finally does arrive, but at least your script won't randomly
hang while it waits for enough input data to empty the
appropriate buffer. I suspect this is a big gotcha for
many people (first time I came across it, it had me
scratching my head for weeks).
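The buffer-it-yourself pattern described above can be sketched like this; again a pipe stands in for the socket, and the partial-line writes simulate data arriving in arbitrary chunks:

```perl
use strict;
use warnings;

# Accumulate raw sysread chunks and hand back complete lines only.
pipe(my $r, my $w) or die "pipe: $!";
syswrite($w, "first li");      # a partial line arrives...
syswrite($w, "ne\nsecond\n");  # ...then the rest, plus another line
close $w;

my $pending = '';
while (sysread($r, my $chunk, 4096)) {
    $pending .= $chunk;
    # Emit every complete line; keep any unterminated tail buffered
    # until the next sysread delivers the rest of it.
    while ($pending =~ s/^(.*?\n)//) {
        print "line: $1";
    }
}
print "leftover: $pending\n" if length $pending;
```

The script itself never blocks on a half-arrived line: sysread hands over whatever bytes exist, and the regex loop only releases data once a newline terminates it.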
Thank you everyone for your responses - it looks like I've been banging my head against all these walls just to create workarounds. syswrite and sysread seem to have solved the stdio buffer errors as well as all of the problems I was having with the encrypted data being sent across the socket incorrectly. I've tested about 20 times and thus far haven't seen any of the checksums fail (each test involves about 10 encrypted transfers and usually one in every 3 tests requires a Resend from either the client or server). Thank you everyone for your suggestions and assistance, this place really is great.
-Adam