linxdev has asked for the wisdom of the Perl Monks concerning the following question:

I have a simple TCP proxy that I've written that is experiencing a bit of an issue. Normally I would accept() a connection, fork() a child, and then select() between the client's socket and the remote system I am trying to proxy. This works well. I thought that this time I would do it in one thread under one select(). This also works well... when all of my clients are on fast connections.

This week I started up a Perl script on some remote devices to pull a software update. These devices connect home via PPP over modem using a dial-on-demand setup. Instead of spending 4 hours downloading the software in one sitting, they spread it over days, grabbing 1 MB chunks, assembling them, and then flashing. This keeps the modem pool free. The problem is that the lines are poor at some sites, so the connection can be 19.2k or even 9600 baud; most are at 38.4k. The write() from the server down (over the modem) to the client can block, and since everything runs in one select() loop, that stalls the whole proxy and slows the downloads for every other device. I switched back to the fork() method and of course the problem went away, but I would like to make an attempt to fix this anyway. My question here is more about "The Right Way(tm)" than strictly Perl.
while (1) {
    for my $socket ($ioset->can_read) {
        if ($socket == $server) {
            new_connection($server, $remote_host, $remote_port);
        }
        else {
            next unless exists $socket_map{$socket};
            my $remote = $socket_map{$socket};
            my $buffer;
            my $read = $socket->sysread($buffer, 512);
            if ($read) {
                # This blocks when $remote is on a slow modem link,
                # stalling every other connection in the loop.
                $remote->syswrite($buffer);
            }
            else {
                close_connection($socket);
            }
        }
    }
}
This script is just a stand-in because these devices can't contact the remote server directly. The server side runs an XML-RPC interface, written in Perl, under lighttpd; the software pieces are fetched as plain files from the same web server. Once the slowness hits, the XML-RPC calls can time out. A config save via XML-RPC can take over 5 minutes when this happens; with the fork-per-connection method it is under a minute.

I could drop the buffer size of the read. Another option would be to convert to non-blocking writes, keep track of how much was actually written, and retry the remainder later. Of course I'd need to change the select logic so it goes back to blocking once all those buffers have drained. My worry is devices that connect while buffers are still backed up: they would have to wait for the other buffers to clear before their connections were even acknowledged.
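For reference, here is a minimal sketch of what that non-blocking variant could look like. It assumes new_connection() also puts each socket into non-blocking mode (e.g. $sock->blocking(0)), and it adds a second IO::Select set for writability. %out_buf is a name I've made up for the per-socket pending buffers, and close_connection() would also need to drop the closed peer's entry from it and from $writers:

use IO::Select;
use Errno qw(EAGAIN EWOULDBLOCK);

my %out_buf;                    # bytes queued per destination socket
my $writers = IO::Select->new;  # only sockets that currently have queued data

while (1) {
    my ($can_read, $can_write) = IO::Select->select($ioset, $writers, undef)
        or next;                # retry if interrupted by a signal

    for my $socket (@$can_write) {
        my $sent = $socket->syswrite($out_buf{$socket});
        if (defined $sent) {
            substr($out_buf{$socket}, 0, $sent, '');   # drop what went out
            $writers->remove($socket) unless length $out_buf{$socket};
        }
        elsif ($! != EAGAIN && $! != EWOULDBLOCK) {
            close_connection($socket);                 # real write error
        }
    }

    for my $socket (@$can_read) {
        if ($socket == $server) {
            new_connection($server, $remote_host, $remote_port);
        }
        else {
            next unless exists $socket_map{$socket};
            my $remote = $socket_map{$socket};
            my $read   = $socket->sysread(my $buffer, 512);
            if ($read) {
                $out_buf{$remote} .= $buffer;          # queue instead of writing
                $writers->add($remote);                # wake us when it can drain
            }
            else {
                close_connection($socket);
            }
        }
    }
}

Because the listening socket stays in the read set of the same select() call, new devices are still acknowledged promptly even while other buffers drain, which addresses the acknowledgement worry above. A production version would also want a high-water mark: stop sysread()ing from a source once its peer's queued buffer grows too large, otherwise the buffer feeding a slow modem link can grow without bound.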

Re: Many sockets under one select
by RonW (Parson) on Oct 29, 2014 at 17:24 UTC

    If you are adventurous enough, you could try threads, but using them is "officially discouraged" (though not (yet) deprecated).

    Lowering the read size, while very simple, will significantly degrade performance.

    Non-blocking writes will add a lot of complexity. I think you will have to select on writability as well as readability.

    If there is a known maximum number of clients, you could pre-fork "worker" processes. Your "master" process would listen for connect requests, then "hand" them off to the next available worker. Unfortunately, it has been many years since I did something like that, so I don't remember how to do the "hand off" (a variation that avoids it is sketched below).
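    One classic way to avoid an explicit hand-off is to let every pre-forked worker block in accept() on the same inherited listening socket; the kernel then picks which worker gets each connection, and on modern kernels only one worker is woken per connection. A minimal sketch of that variant (the port number and handle_client() are assumptions, standing in for the per-connection proxy logic above):

        use strict;
        use warnings;
        use IO::Socket::INET;

        my $listener = IO::Socket::INET->new(
            LocalPort => 8080,     # assumed port
            Listen    => 10,
            ReuseAddr => 1,
        ) or die "listen: $!";

        my $workers = 5;           # sized to the known maximum number of clients
        for (1 .. $workers) {
            defined(my $pid = fork) or die "fork: $!";
            next if $pid;                              # master keeps spawning
            while (my $client = $listener->accept) {   # each worker accepts directly
                handle_client($client);                # hypothetical per-connection code
                close $client;
            }
            exit;
        }
        1 while wait != -1;        # master only reaps workers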

    Unless there are performance or resource reasons not to use the fork-on-demand model, it is probably best to just stay with that.

      I had thought the same. I was just looking at possible solutions to make the single thread idea work. Thanks.