in reply to Re^2: Bid data but need fast response time
in thread Bid data but need fast response time
I might even keep your "sleep" so that my server processes expire every so often as a memory-leak counter-measure.)
You'll probably want to set it to something less than 31 years then :)
I expect somewhere between 1 and 100 simultaneous clients. Each connection should last no more than 1sec with 10k+ queries.
Then I'd go for a thread-pool server, something like this (again, a cut-down version of pre-existing test code, not production-ready code):
#! perl -slw
use strict;
use threads;
use Thread::Queue;
use IO::Socket;

use constant {
    SERVERIP   => '127.0.0.1',
    SERVERPORT => 3000,
    MAXBUF     => 4096,
};

## Number of responder threads; settable from the command line via -R=n
our $R //= 8;

## Render a packed sockaddr as "host:port" for logging
sub s2S {
    my( $p, $h ) = sockaddr_in( $_[0] );
    $h = inet_ntoa( $h );
    "$h:$p";
}

## Load the "database": each line of the input file(s) becomes a key, its line number the value
my %DB :shared;
chomp, $DB{ $_ } = $. while <>;
close *ARGV;

my $Q        = new Thread::Queue;
my $Qcleanup = new Thread::Queue;

## Pool thread: pull filenos from the queue, dup the socket and service the connection
sub responder {
    my $tid = threads->tid;
    while( my $fileno = $Q->dequeue() ) {
        print "[$tid] Servicing fileno: $fileno";
        open my $client, '+<&=', $fileno or die $!;
        bless $client, 'IO::Socket::INET';
        while( 1 ) {
            $client->recv( my $in, MAXBUF );
            unless( length $in ) {
                print "Disconnected from ", s2S $client->peername;
                shutdown $client, 2;
                close $client;
                $Qcleanup->enqueue( $fileno );
                last;
            }
            print "Received $in from ", s2S $client->peername;
            my( $cmd, @args ) = split ' ', $in;
            if( $cmd eq 'FETCH' ) {
                $client->send( $DB{ $args[ 0 ] } );
            }
            else {
                $client->send( 'Bad command' );
            }
        }
    }
}

threads->create( \&responder )->detach for 1 .. $R;

my $lsn = IO::Socket::INET->new(
    LocalHost => SERVERIP,
    LocalPort => SERVERPORT,
    Reuse     => 1,
    Listen    => SOMAXCONN,
) or die $!;

my @clients;
print "Listening...";

## Accept connections and hand the fileno off to the pool; reap closed ones
while( my $client = $lsn->accept ) {
    my $fileno = fileno( $client );
    $clients[ $fileno ] = $client;
    print "[0] queueing ", $fileno;
    $Q->enqueue( $fileno );
    close $clients[ $Qcleanup->dequeue ] while $Qcleanup->pending;
}
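The %DB "database" is populated from whatever file(s) you name on the command line: each line of input becomes a key and its line number the value, so FETCH <key> returns the line number. So you'd start it with something like perl server.pl -R=8 keys.txt (the script and file names here are just placeholders), with the -R switch setting the size of the thread pool.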
In a test running 8 responders, it served 1000 responses to each of 100 concurrent clients, with an average response time, measured at the clients, of 0.002 seconds.
That's with clients and server running on the same box, so there's no network latency; but on the other hand that's 8 server threads and 100 client threads all competing for the same box, which will obviously adversely affect server responsiveness.
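For reference, here's a rough sketch of the kind of harness I'd use to drive it from the client side. This isn't the exact code that produced the numbers above; the -C/-N switches, the fork-per-client structure and the Time::HiRes timing are just one way of doing it:

#! perl -slw
## Minimal test-client sketch (an assumption, not the original harness):
## forks -C client processes, each of which connects once, issues -N FETCH
## queries, and reports its own average round-trip time.
use strict;
use IO::Socket::INET;
use Time::HiRes qw[ time ];

use constant {
    SERVERIP   => '127.0.0.1',
    SERVERPORT => 3000,
    MAXBUF     => 4096,
};

our $C //= 100;     ## number of concurrent clients, via -C=n
our $N //= 1000;    ## queries per client, via -N=n

for ( 1 .. $C ) {
    defined( my $pid = fork ) or die "fork failed: $!";
    next if $pid;                     ## parent: go spawn the next client

    my $server = IO::Socket::INET->new(
        PeerAddr => SERVERIP,
        PeerPort => SERVERPORT,
    ) or die "Connect failed: $!";

    my $total = 0;
    for my $key ( 1 .. $N ) {
        my $start = time;
        $server->send( "FETCH $key" );
        $server->recv( my $reply, MAXBUF );
        $total += time() - $start;
    }
    printf "[%d] average response time: %.6f seconds\n", $$, $total / $N;
    close $server;
    exit 0;
}

1 while wait != -1;                   ## parent reaps all the client children

Each child connects once, fires its FETCH requests back-to-back, and reports its own mean round-trip time; the per-client averages are what I'd aggregate for a figure like the 0.002 seconds above.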
Here's a graph of its response times using 8 threads to respond to 16, 32, 64, 100, and 128 concurrent clients.
[Graph: response times, 8 responder threads vs. 16, 32, 64, 100, and 128 concurrent clients]