I'm sorry I could not respond yesterday. HTTP/1.1 connections are persistent by default, and HTTP/1.0 has the optional Keep-Alive header for requesting persistent connections. This should not be a problem as long as the server is fully HTTP/1.1 compliant.
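For the record, persistent connections are easy to get at from Perl. Here is a minimal sketch using LWP::UserAgent's keep_alive option (the URL is just a placeholder):

use strict;
use warnings;
use LWP::UserAgent;

# keep_alive => 1 gives the agent a connection cache (LWP::ConnCache),
# so repeated requests to the same host reuse one TCP (or TLS) connection.
my $ua = LWP::UserAgent->new( keep_alive => 1 );

for ( 1 .. 10 ) {
    my $resp = $ua->get('http://juster.us/');    # placeholder URL
    die $resp->status_line unless $resp->is_success;
}

With keep_alive set, only the first request pays the connect (and, for https, the TLS handshake) cost.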
I never considered persistent AJAX connections! I think AJAX could be problematic if you are sending XMLHttpRequests to a server in response to user events. Even if these connections were persistent, they would time out during a lull in events. The Comet (server push) techniques use long-lived connections where the server does not respond immediately, which would not be necessary if persistent connections were used. So maybe AJAX is not implemented with persistence in mind at all?
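To illustrate the idea, the client side of long polling boils down to a loop like this sketch (the /events endpoint and the 5-minute timeout are made up):

use strict;
use warnings;
use LWP::UserAgent;

# Long polling: the server holds each request open until an event occurs
# (or the connection times out), so the client needs a generous timeout.
my $ua = LWP::UserAgent->new( keep_alive => 1, timeout => 300 );

while (1) {
    my $resp = $ua->get('http://example.com/events');  # hypothetical endpoint
    next unless $resp->is_success;    # timeout or error: just poll again
    print 'event: ', $resp->decoded_content, "\n";
}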
I cleaned up my script and will update my earlier post with the new source code. Here is some sample output. The min/mean/med/max/stdev line shows per-request latencies in milliseconds, while the line above it is in plain seconds.
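The stats themselves are nothing fancy; they amount to something like this (a sketch, not the script's actual code):

use strict;
use warnings;
use List::Util qw(min max sum);

# Crude stats over a list of per-request latencies, in milliseconds.
sub latency_stats {
    my @ms   = sort { $a <=> $b } @_;
    my $mean = sum(@ms) / @ms;
    my $med  = $ms[ $#ms / 2 ];                              # crude median
    my $var  = sum( map { ( $_ - $mean )**2 } @ms ) / @ms;
    return ( min(@ms), $mean, $med, max(@ms), sqrt $var );
}

printf "%d min; %d mean; %d med; %d max; %d stdev\n",
    latency_stats( 88, 98, 98, 268, 103 );    # made-up sample latencies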
Here I am on my MacBook slamming my Linode webserver. The front page is a measly 3K.
mclapdawg:921497 juster$ ./webstress.pl
Usage: ./webstress.pl [--conmax 5] [--reqs 100] [--desist] <url>
mclapdawg:921497 juster$ ./webstress.pl -c 1 -r 100 http://juster.us
9.819 seconds; 100 requests (10.2/sec); 391800 bytes (39901/sec)
88 min; 98 mean; 98 med; 268 max; 17 stdev
mclapdawg:921497 juster$ ./webstress.pl -c 1 -r 100 https://juster.us
10.207 seconds; 100 requests (9.8/sec); 391800 bytes (38386/sec)
89 min; 102 mean; 98 med; 537 max; 43 stdev
mclapdawg:921497 juster$ ./webstress.pl -c 10 -r 100 http://juster.us
2.552 seconds; 100 requests (39.2/sec); 391800 bytes (153497/sec)
188 min; 246 mean; 242 med; 442 max; 47 stdev
mclapdawg:921497 juster$ ./webstress.pl -c 10 -r 100 https://juster.us
2.844 seconds; 100 requests (35.2/sec); 391800 bytes (137771/sec)
195 min; 272 mean; 238 med; 684 max; 104 stdev
With more concurrent connections (using -c), the requests and bytes per second go up, while the latency of each individual request also increases.
Now I try some more runs on the server itself. Notice how much worse HTTPS performs! I suspect the latency is so low that the overhead of encryption causes a noticeable delay, becoming substantial by comparison.
[juster@artemis ~]$ ./webstress.pl -c 50 -r 1000 http://localhost/
0.495 seconds; 1000 requests (2019.0/sec); 3918000 bytes (7910531/sec)
4 min; 23 mean; 22 med; 34 max; 4 stdev
[juster@artemis ~]$ ./webstress.pl -c 50 -r 1000 https://localhost/
0.733 seconds; 1000 requests (1365.0/sec); 3918000 bytes (5348030/sec)
22 min; 35 mean; 27 med; 95 max; 19 stdev
[juster@artemis ~]$ ./webstress.pl -c 100 -r 1000 http://localhost/
0.449 seconds; 1000 requests (2227.8/sec); 3918000 bytes (8728507/sec)
7 min; 41 mean; 43 med; 46 max; 6 stdev
[juster@artemis ~]$ ./webstress.pl -c 100 -r 1000 https://localhost/
0.740 seconds; 1000 requests (1351.1/sec); 3918000 bytes (5293651/sec)
46 min; 68 mean; 52 med; 124 max; 25 stdev
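If I wanted to isolate the TLS setup cost, I could time a bare TCP connect against a full TLS handshake. A rough sketch using IO::Socket::SSL (SSL_VERIFY_NONE because of the self-signed cert on localhost):

use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);
use IO::Socket::INET;
use IO::Socket::SSL;

my $t0  = [gettimeofday];
my $tcp = IO::Socket::INET->new( PeerAddr => 'localhost:80' )
    or die "tcp connect failed: $@";
printf "tcp connect:   %.1f ms\n", 1000 * tv_interval($t0);

$t0 = [gettimeofday];
# The SSL constructor does the TCP connect plus the TLS handshake.
my $ssl = IO::Socket::SSL->new(
    PeerAddr        => 'localhost:443',
    SSL_verify_mode => SSL_VERIFY_NONE,    # self-signed cert
) or die 'ssl connect failed: ', IO::Socket::SSL::errstr();
printf "tls handshake: %.1f ms\n", 1000 * tv_interval($t0);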
Next, a 4MB file download from my MacBook:
mclapdawg:921497 juster$ ./webstress.pl -c 5 -r 100 http://juster.us/junk
221.865 seconds; 100 requests (0.5/sec); 418170400 bytes (1884793/sec)
10087 min; 10899 mean; 17228 med; 9985 max; 3626 stdev
mclapdawg:921497 juster$ ./webstress.pl -c 5 -r 100 https://juster.us/junk
224.339 seconds; 100 requests (0.4/sec); 418170400 bytes (1864012/sec)
10036 min; 10954 mean; 5728 med; 9964 max; 4919 stdev
And the same on the server itself:
[juster@artemis ~]$ ./webstress.pl -c 5 -r 100 http://localhost/junk
2.792 seconds; 100 requests (35.8/sec); 418170400 bytes (149798160/sec)
58 min; 136 mean; 125 med; 229 max; 30 stdev
[juster@artemis ~]$ ./webstress.pl -c 5 -r 100 https://localhost/junk
40.684 seconds; 100 requests (2.5/sec); 418170400 bytes (10278470/sec)
509 min; 1993 mean; 2016 med; 2789 max; 246 stdev
Wow, that's a big hit! On the server the requests bog down even more when fetching the 4MB file over HTTPS. Perhaps the decryption is done all at once? My curiosity isn't great enough to dig into Net::SSLeay.
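If I were, a cheap first experiment (short of reading Net::SSLeay) would be timing individual reads off the SSL socket, to see whether the time is spent up front or spread evenly across the transfer. A sketch:

use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);
use IO::Socket::SSL;

my $sock = IO::Socket::SSL->new(
    PeerAddr        => 'localhost:443',
    SSL_verify_mode => SSL_VERIFY_NONE,    # self-signed cert
) or die 'connect failed: ', IO::Socket::SSL::errstr();

# HTTP/1.0 so the server closes the connection when the body is done.
print $sock "GET /junk HTTP/1.0\r\nHost: localhost\r\n\r\n";

# Time each 16K read (headers included, which is fine for a rough look).
# If decryption cost scales with the payload, the per-chunk times should
# be roughly even rather than front-loaded.
my $buf;
while (1) {
    my $t0 = [gettimeofday];
    my $n  = sysread( $sock, $buf, 16384 );
    last unless $n;
    printf "%6d bytes in %.2f ms\n", $n, 1000 * tv_interval($t0);
}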