Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hello, I am trying to avoid blocking reads from a FIFO by using select. It works, but only if I first write something to the FIFO. Is this the normal behaviour, or am I still doing something wrong?
use IO::Select;

my $s = IO::Select->new;
my $dxc_fifo = "/tmp/dxc_fifo";
my $fifo_str;
open my $fifo_fh, "+>", "$dxc_fifo" or die "could not open $dxc_fifo\n";
$s->add($fifo_fh);
print $fifo_fh "START\n";
my $i = 0;
while (1) {
    sleep 1;
    $i++;
    print "loop #$i \n";
    my @files = $s->can_read(.25);
    if (@files) {
        for my $fh (@files) {
            my $line = <$fh>;
            if ($line) {
                print "from pipe: $line";
            }
        }
    }
}
When I open the FIFO with "<" and omit the print "START", the loop only starts running after an external program writes something to the FIFO. But I want the loop to run immediately. Thanks, A.

Replies are listed 'Best First'.
Re: First SELECT of FIFO still blocked ?
by Corion (Patriarch) on Dec 16, 2015 at 13:31 UTC
    my $line = <$fh>;

    This will block. You should not mix non-blocking and blocking I/O when trying to stay non-blocking.

    select will only tell you if there is at least one byte readable on a handle. It doesn't tell you how many bytes are actually available on the handle without blocking.
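
    For illustration, a minimal sketch along those lines (not code from the thread; it assumes the /tmp/dxc_fifo path from the question and a read-write open): once can_read() reports the handle, read it with sysread() rather than <$fh>, so the call returns whatever bytes happen to be there instead of waiting for a complete line.

    use strict;
    use warnings;
    use IO::Select;

    my $dxc_fifo = "/tmp/dxc_fifo";   # path taken from the original post
    # "+<" (read/write) so open() itself does not wait for a writer; works on Linux
    open my $fifo_fh, "+<", $dxc_fifo or die "could not open $dxc_fifo: $!";
    my $s = IO::Select->new($fifo_fh);

    my $i = 0;
    while (1) {
        $i++;
        print "loop #$i\n";
        for my $fh ( $s->can_read(0.25) ) {
            my $buf;
            my $n = sysread( $fh, $buf, 4096 );   # returns what is available right now
            print "from pipe: $buf" if $n;        # note: may be a partial line
        }
    }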

Re: First SELECT of FIFO still blocked ?
by Apero (Scribe) on Dec 16, 2015 at 22:05 UTC

    In addition to the <> operator blocking and relying on perlio buffering, as an earlier reply noted, your open() call will also block until the other end of the FIFO is opened. To avoid blocking there, you'd probably need some kind of SIGALRM handler and an alarm() timeout, like this:

    my $pipe;
    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };
        alarm(5);
        open( $pipe, "<", $pipe_path ) or die "pipe open failed: $!";
        alarm(0);
    };
    if ($@) {
        die unless ($@ eq "alarm\n");   # non-alarm failure.
        die "timeout on pipe open";
    }

    Remember that with unbuffered reads (you'd need sysread() for that) you don't get neat breaks on newlines, so you'd want to test for "\n" characters in your result. I've given some very simple example code below that only reads a single character, but this is highly inefficient because it causes a system read(2) call for every character read.

    Smarter is to do chunked reads of up to a reasonable block size (block sizes of 1-8 KiB are common), then scan the chunk for a newline, printing out each complete segment and advancing the buffer past it. However, I've opted to ignore that kind of performance detail to keep the example below simple. You'd presumably want chunked reads if you cared at all about performance under any kind of real load.
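
    A rough sketch of that chunked approach (not part of the reply above; it assumes $fh is a handle that can_read() just returned) could look like:

    my $pending = '';   # declared once, outside the read loop, so partial lines carry over

    # inside the loop, after can_read() returned $fh:
    my $n = sysread( $fh, my $chunk, 4096 );   # one read(2) call for up to 4 KiB
    if ($n) {
        $pending .= $chunk;
        # emit every complete line; keep any trailing partial line in $pending
        while ( $pending =~ s/^([^\n]*\n)// ) {
            print "from pipe: $1";
        }
    }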

    Here's a trivial server/client example for an unbuffered read on a pipe. Note that the client (the writer) is buffered out of convenience, and that the server "idles" when the select(2) call has nothing to do. Typically in an unbuffered read loop you're busy doing other tasks.
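
    A stand-in sketch with the shape described above (a forked, buffered writer as the "client"; a reader that idles on select and does single-character sysread() calls; the /tmp/demo_fifo path is invented for the sketch):

    use strict;
    use warnings;
    use IO::Select;
    use POSIX qw(mkfifo);

    my $path = "/tmp/demo_fifo";                 # invented path for this sketch
    unless ( -p $path ) {
        mkfifo( $path, 0700 ) or die "mkfifo failed: $!";
    }

    my $pid = fork() // die "fork failed: $!";
    if ( $pid == 0 ) {
        # child: the "client" writer, left buffered for convenience,
        # so its output arrives when the buffer is flushed at close()
        open my $out, ">", $path or die "writer open failed: $!";
        print {$out} "hello\n";
        sleep 2;
        print {$out} "world\n";
        close $out;
        exit 0;
    }

    # parent: the "server" reader
    open my $in, "<", $path or die "reader open failed: $!";
    my $sel  = IO::Select->new($in);
    my $line = '';

    while (1) {
        unless ( $sel->can_read(0.25) ) {
            print "idle...\n";                   # nothing readable: free to do other work
            next;
        }
        my $n = sysread( $in, my $c, 1 );        # unbuffered: one character per read(2)
        last unless $n;                          # 0 means the writer closed the pipe
        $line .= $c;
        if ( $c eq "\n" ) {
            print "got line: $line";
            $line = '';
        }
    }

    waitpid( $pid, 0 );
    unlink $path;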

      Thanks, very interesting stuff, I'll try to understand it.