So you have two options. One is to do a select() followed by a one-byte sysread() for every individual byte. This is probably easier, but very inefficient.
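For concreteness, here's a minimal sketch of that byte-at-a-time approach. It uses a local pipe instead of STDIN so it's self-contained; the pipe and the "hello" payload are just for illustration:

```perl
use strict;
use warnings;
use IO::Select;

# A pipe stands in for STDIN so the example is self-contained.
pipe(my $r, my $w) or die "pipe: $!";
syswrite($w, "hello") // die "syswrite: $!";

my $sel  = IO::Select->new($r);
my $data = '';

# Only sysread when select says a byte is ready, so the (blocking)
# handle never actually blocks -- at the cost of one syscall pair per byte.
while ($sel->can_read(0)) {
    my $n = sysread($r, my $byte, 1);
    last unless $n;    # 0 = EOF, undef = error
    $data .= $byte;
}

print "$data\n";
```

The select/sysread pair per byte is what makes this approach so expensive compared to reading in chunks.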
Another is to change STDIN to be nonblocking, and try to sysread large chunks at a time. Keep trying until sysread returns undef, and then check $!{EWOULDBLOCK}. Completely untested:
    sub read_available {
        my ($fh) = @_;
        my $data = '';
        while (1) {
            my $buf;
            my $nread = sysread($fh, $buf, 4096);
            if (defined $nread) {
                if ($nread == 0) {
                    # End of file, so we have everything we'll ever get
                    return ($data, 1);
                }
                else {
                    $data .= $buf;
                }
            }
            else {
                # EWOULDBLOCK and EAGAIN may be distinct errno values on
                # some systems, so check both
                if ($!{EWOULDBLOCK} || $!{EAGAIN}) {
                    return ($data, 0);
                }
                else {
                    return;    # Error
                }
            }
        }
    }

    . . .

    # Set STDIN nonblocking (IO::Handle must be loaded for blocking())
    use IO::Handle;
    IO::Handle->new_from_fd(fileno(STDIN), "r")->blocking(0);

    . . .

    # If select says that STDIN is readable...
    my ($data, $eof) = read_available(*STDIN)
        or die "Error reading stdin: $!";
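Here's one way the pieces might fit together end to end. This demo uses a pipe rather than STDIN (so it's self-contained), sets the read end nonblocking, and drains it through a condensed copy of the read_available routine above after IO::Select reports it readable; the 10,000-byte payload is arbitrary, chosen to span more than one 4096-byte chunk:

```perl
use strict;
use warnings;
use IO::Select;
use IO::Handle;

# Condensed copy of read_available so this demo stands alone.
sub read_available {
    my ($fh) = @_;
    my $data = '';
    while (1) {
        my $nread = sysread($fh, my $buf, 4096);
        if (defined $nread) {
            return ($data, 1) if $nread == 0;    # EOF: ($data, $eof = 1)
            $data .= $buf;
        }
        else {
            # No more data right now (nonblocking read would block)
            return ($data, 0) if $!{EWOULDBLOCK} || $!{EAGAIN};
            return;    # Real error
        }
    }
}

pipe(my $r, my $w) or die "pipe: $!";
$r->blocking(0);                  # set the read end nonblocking
syswrite($w, "x" x 10_000);       # more than one 4096-byte chunk

my ($data, $eof);
my $sel = IO::Select->new($r);
if ($sel->can_read(1)) {
    ($data, $eof) = read_available($r)
        or die "Error reading pipe: $!";
    # All buffered bytes arrive in one call; $eof is false because
    # the write end is still open.
}
```

Note that one call to read_available returns everything currently buffered, not just one chunk, which is the whole point of going nonblocking.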
I really should test this once, but hopefully it gives the right idea. Seems like a correct version of this routine ought to be in a FAQ somewhere. Or maybe it is; I didn't check.
In reply to Re^3: forked kid can't read from IO::Select->can_read
by sfink
in thread forked kid can't read from IO::Select->can_read
by Anonymous Monk