in reply to Load testing/Simultaneous HTTP requests
Unless you run your loadtest script on a machine with 99 processors, you are not going to hit the server at exactly the same time, regardless of whether you use processes, threads, or asyncIO.
Of course, even if you had 99 processors, the variability in network responses, the bottleneck of the TCP/IP stack and interface card, and congestion delays (whether Floyd or BEB) all mean that you aren't going to hit the server at exactly the same time.
You would probably need to run 100 "users" on 10 machines for several thousand cycles before you came close to 100 truly simultaneous requests, but you're probably more concerned with handling 100 concurrent requests?
The script below will come close to your requirements. You may need to adjust the delay factor (0.1) to ensure that all 100 threads are spawned and ready to go at the same time. The simple trace will tell you how close to simultaneous the requests were issued; your log file will tell you the rest. You could also make it delay until a specified time of day and (provided your machines are time-synced) run multiple copies on different machines to get a more realistic test; a sketch of that variant follows the listing. Anyway, it's a simple starting point.
#! perl -slw
use strict;
use threads;
use Time::HiRes qw[ time sleep ];
use LWP::Simple;

sub hitEm {
    my( $url, $when ) = @_;
    sleep $when - time;                          # wait for the agreed start time
    printf "%3d : %s\n", threads->tid, time;     # trace: thread id and issue time
    get $url;
}

my( $users, $url ) = @ARGV;

my $when = time + 0.1 * $users;                  # delay factor: 0.1s per thread to spawn

my @users = map {
    threads->create( \&hitEm, $url, $when );
} 1 .. $users;

sleep $when - time;
$_->join for @users;

__END__
c:\test>534459 10 http://news.bbc.co.uk/
  5 : 1141489281.46878
  6 : 1141489281.46877
  7 : 1141489281.46877
 10 : 1141489281.46877
  8 : 1141489281.48441
  9 : 1141489281.48439
  3 : 1141489281.48439
  1 : 1141489281.4844
  2 : 1141489281.48439
  4 : 1141489281.4844
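For the run-at-a-specified-time-of-day variant mentioned above, here is a minimal sketch. The -at=<epoch seconds> switch is my own naming (not part of the script above), and it assumes the clocks on the participating machines are synced; everything else is unchanged.

#! perl -slw
use strict;
use threads;
use Time::HiRes qw[ time sleep ];
use LWP::Simple;

our $at;    # optional -at=<epoch seconds>: fire at this wall-clock time (assumed switch name)

sub hitEm {
    my( $url, $when ) = @_;
    sleep $when - time if $when > time;          # wait until the agreed start time
    printf "%3d : %s\n", threads->tid, time;     # trace: thread id and issue time
    get $url;
}

my( $users, $url ) = @ARGV;

# If -at was given, use it; otherwise fall back to the 0.1s-per-thread delay factor.
my $when = defined $at ? $at : time + 0.1 * $users;

my @users = map {
    threads->create( \&hitEm, $url, $when );
} 1 .. $users;

sleep $when - time if $when > time;
$_->join for @users;

Because -s switch parsing stops at the first non-switch argument, the switch has to come first, e.g. 534459 -at=1141489300 100 http://yourserver/, run on each machine.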