markh has asked for the wisdom of the Perl Monks concerning the following question:

I have written an application which uses IO::Socket to send data back and forth between client computers and a central server. I have noticed that periodically there will be some sort of network glitch (we send quite a bit of data at times, so these connections can take a while), and the connection ends up hung. I'd like to add code to both the client and the server so that each side essentially gives up and closes the connection after a specific amount of time. I've looked at using alarm and setting up signal handlers on both ends, but I'm wondering if this is the best method, or if anyone knows of a better solution.

Ideas??

Re: IO::Socket Timeouts
by mr_mischief (Monsignor) on Oct 10, 2006 at 19:22 UTC
    There are a few ways to handle this, and it partly depends on what you want. Chances are your hang is due to the system blocking your read call when there's nothing to be read. Any way you can keep this from happening will probably work.

    If you go with alarm(), I'd probably either not handle the signal or just use the handler to clean up a bit. There are better ways to handle the case of nothing to read on a socket. Still, if your network glitches are infrequent but usually last a while, alarm() is probably your best bet. No reason to waste cycles and memory for a process to try to recover if the network's not back up soon.
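
    A minimal sketch of that style, where the handler only cleans up before the process exits (the host/port, the 120-second limit, and do_transfer() are illustrative placeholders, not from the thread):

    use IO::Socket::INET;

    my $sock = IO::Socket::INET->new(
        PeerAddr => 'server.example.com',   # placeholder host/port
        PeerPort => 4000,
    ) or die "connect failed: $!";

    # If the whole transfer hasn't finished within two minutes, assume the
    # network is down, clean up, and exit; the next scheduled run retries.
    $SIG{ALRM} = sub {
        close($sock) if $sock;
        exit 1;
    };

    alarm(120);           # arm the watchdog before starting the transfer
    do_transfer($sock);   # hypothetical blocking send/receive work
    alarm(0);             # finished in time; disarm the watchdog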

    If your network glitches are short, using nonblocking reads or the select() function may be your best option. With a nonblocking read, you just check to see if you're getting as much data as you expected, and loop with a sleep() or something until you do. The select() function (four-argument version) lets you see if there's anything to read before trying the read. If there's nothing to read, your program can log that info and sleep a while. You'd still want to check the amount of data you're getting once you do perform a read.
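
    A rough sketch of the nonblocking-read loop (assuming $sock is an already-connected IO::Socket handle; the poll count, chunk size, and have_complete_message() framing check are made up for illustration):

    use Errno qw(EAGAIN EWOULDBLOCK);

    $sock->blocking(0);                 # put the handle into non-blocking mode

    my $data = "";
    for (1 .. 30) {                     # poll for roughly 30 seconds at most
        my $buf;
        my $got = sysread($sock, $buf, 8192);
        if (defined $got && $got > 0) {
            $data .= $buf;
            last if have_complete_message(\$data);   # your own framing check
        }
        elsif (defined $got) {          # 0 bytes read: peer closed the socket
            last;
        }
        elsif ($! != EAGAIN && $! != EWOULDBLOCK) {
            die "read error: $!";       # a real error, not just "nothing yet"
        }
        sleep 1;                        # wait a moment before polling again
    }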


    Christopher E. Stith
      The main problem is that the perl process which runs on the client machine will appear to sit there forever if there is some sort of network issue. I'd like to have it time out so it can reset itself and try the connection again in a few minutes.

      The general concept here is that these clients periodically connect to the server to send/receive any data that needs to be transferred. Right now, if there is any kind of network glitch during a transfer (which seems to happen with a few of these clients that have dodgy DSL lines), the client will sit there forever. I'm trying to come up with a relatively clean way for the client machine to realize it has taken way too long, and that it should give up and just attempt a new connection during its next update window.
        Well, then I'd go with alarm on the client side, and reset the alarm after each successful read or send. There's also the connected method of IO::Socket. In the handler, I'd check with $socket->connected, shut down the socket, and reconnect. I can think of no better way to handle a connection that just hangs forever than alarm().
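
        A small sketch of that idea on the client side (the server address, the 30-second limit, and process_chunk() are illustrative placeholders, not from the thread):

        use IO::Socket::INET;

        my $sock = IO::Socket::INET->new(
            PeerAddr => 'server.example.com',   # placeholder host/port
            PeerPort => 4000,
        ) or die "connect failed: $!";

        $SIG{ALRM} = sub {
            # The transfer has stalled: tear the socket down so the client
            # can simply retry during its next update window.
            close($sock) if $sock && $sock->connected;
            die "transfer timed out\n";
        };

        eval {
            alarm(30);                          # allow 30 seconds of silence
            my $buf;
            while (sysread($sock, $buf, 8192)) {
                process_chunk($buf);            # hypothetical application code
                alarm(30);                      # re-arm after each good read
            }
            alarm(0);                           # transfer done; cancel alarm
        };
        warn $@ if $@;                          # the timeout message lands here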

        --shmem

Re: IO::Socket Timeouts
by hawtin (Prior) on Oct 11, 2006 at 07:30 UTC

    Another commonly used alternative is the four-argument select(). For example:

    my $timeout = 30;
    my $fdset   = "";
    # Watch the socket's file descriptor for readable data
    vec($fdset, $connect->fileno, 1) = 1;

    while (1) {
        my $buffer;
        my $n = select($fdset, undef, undef, $timeout);
        if ($n < 0) {
            carp("Select failed $n\n");
            return undef;
        }
        elsif ($n == 0) {
            carp("Timeout expired\n");
            return undef;
        }
        # $buffer_size and $data are assumed to be declared elsewhere
        my $count = sysread($connect, $buffer, $buffer_size, 0);
        # Should check the sysread's return value

        # Do we have a complete request?
        $data .= $buffer;
        last if request_complete(\$data);
    }
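
    For comparison, the same loop can be written with the IO::Select wrapper, which hides the vec() bookkeeping (a sketch under the same assumptions as the snippet above: it runs inside a sub, $connect is the connected socket, and request_complete() is your own framing check):

    use Carp;
    use IO::Select;

    my $sel     = IO::Select->new($connect);   # watch just this one handle
    my $timeout = 30;
    my $data    = "";

    while (1) {
        # can_read() blocks for at most $timeout seconds
        my @ready = $sel->can_read($timeout);
        unless (@ready) {
            carp("Timeout expired\n");
            return undef;
        }
        my $buffer;
        my $count = sysread($connect, $buffer, 8192);
        return undef unless defined $count && $count > 0;   # error or EOF
        $data .= $buffer;
        last if request_complete(\$data);
    }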