in reply to Re^4: Threads and HTTPS Segfault
in thread Threads and HTTPS Segfault
I will have to respectfully disagree with you, BrowserUk. On modern hardware, SSL/TLS delays are hardly noticeable; your network latency would have to be incredibly low for decryption to take longer than fetching the data. My idea was that a single processor could decrypt one response while it waits on the network for others, by using an event framework. As long as network latency is greater (hopefully much greater) than the time decryption takes, performance should not suffer.
In my own experiments, the prohibitive delay with HTTPS comes from the handshake that begins each encrypted connection. The client and server exchange several messages back and forth, each round trip subject to network latency! Taking advantage of persistent HTTP/1.1 connections is practically a necessity.
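To make the persistence point concrete, here is a minimal sketch (separate from the benchmark below; the URL is a placeholder and the timing labels are mine) of how AnyEvent::HTTP's persistent option lets a second request reuse the connection and skip the TCP and TLS handshakes:

use strict;
use warnings;
use AnyEvent;
use AnyEvent::HTTP;

my $url = 'https://example.com/';    # placeholder URL
my $cv  = AnyEvent->condvar;
my $t0  = AnyEvent->time;

# The first request pays for the TCP connect plus the TLS handshake
# round trips.
http_get $url, 'persistent' => 1, sub {
    printf "first:  %0.3fs\n", AnyEvent->time - $t0;
    my $t1 = AnyEvent->time;

    # The second request reuses the established connection, so it
    # costs roughly one request/response round trip.
    http_get $url, 'persistent' => 1, sub {
        printf "second: %0.3fs\n", AnyEvent->time - $t1;
        $cv->send;
    };
};

$cv->recv;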
I made a benchmark to check my theory. The script is admittedly funky and limited. The experiment uses EV, AnyEvent, and AnyEvent::HTTP to see whether CPU would be a limiting factor, on the idea that switching between http and https would show a noticeable difference if it were. Relearning AnyEvent took me a while and I wasted a lot of time on this today, but maybe someone will find it useful for stress-testing or experimenting.
edit: Cleaned up the code to play around with it more.
#!/usr/bin/env perl
##
# webstress.pl -- Website stress-testing with AnyEvent.

use warnings ('FATAL' => 'all');
use strict;
use EV;
use AnyEvent;
use AnyEvent::HTTP;
use Getopt::Long;

sub usage {
    # Print the usage message and abort.
    die "Usage:\t$0 [--conmax 5] [--reqs 100] [--desist] <url>\n";
}

sub main {
    my %opts = ('desist' => 0, 'conmax' => 5, 'reqs' => 100);
    GetOptions(\%opts, qw/conmax=i reqs=i desist/) or usage();
    usage() unless @ARGV == 1;

    my $url = shift @ARGV;
    unless ($url =~ m{\Ahttps?://}) {
        print STDERR "URL must have scheme of http[s].\n";
        usage();
    }
    $opts{'url'} = $url;

    my $start = AnyEvent->time;
    my ($times, $bytes) = benchgets(\%opts);
    my $elapsed = AnyEvent->time - $start;

    printstats({ 'reqs'    => $opts{'reqs'},
                 'elapsed' => $elapsed,
                 'times'   => $times,
                 'bytes'   => $bytes });
    return 0;
}

sub benchgets {
    my ($opts) = @_;
    my ($conmax, $reqcount, $url, $desist) =
        @{$opts}{qw/conmax reqs url desist/};
    my ($i, $bytes, $donecv, @times) = (0, 0, AnyEvent->condvar, ());
    $donecv->begin for 1 .. $reqcount;

    my $clockget;
    $clockget = sub {
        my $reqbeg = AnyEvent->time;
        my $cb = sub {
            my ($body, $hdrs) = @_;
            die "HTTP @{$hdrs}{'Status','Reason'}"
                unless $hdrs->{'Status'} == 200;
            die 'Content length is zero' if length $body == 0;
            $bytes += length $body;
            my $t = AnyEvent->time - $reqbeg;
            push @times, $t;
            $donecv->end;

            # After each response is received, send out another request.
            $clockget->() unless $i >= $reqcount;
        };
        http_get($url, 'persistent' => !$desist, $cb);
        ++$i;
    };

    # Start off a self-spawning batch of requests.
    $clockget->() for 1 .. $conmax;

    # Continue from here after the last response is received.
    $donecv->recv;
    return \@times, $bytes;
}

sub printstats {
    my ($opts) = @_;
    my ($reqcount, $elapsed, $T, $bytes) =
        @{$opts}{'reqs', 'elapsed', 'times', 'bytes'};
    @$T = sort { $a <=> $b } @$T;    # numeric sort makes min, max, median easier

    # Print simple statistics; the latency figures are in milliseconds.
    printf "%0.3f seconds; %d requests (%0.1f/sec); %d bytes (%d/sec)\n",
        $elapsed, $reqcount, $reqcount / $elapsed, $bytes, $bytes / $elapsed;
    printf "%d min; %d mean; %d med; %d max; %d stdev\n",
        map { $_ * 1_000 }
        ($T->[0], mean($T), $T->[$#$T/2], $T->[$#$T], stdev($T));
    return;
}

sub sum  { my $a; $a += $_ for @{$_[0]}; $a }
sub mean { sum($_[0]) / @{$_[0]} }

sub stdev {
    my $a = shift;
    my $m = mean($a);
    sqrt mean([ map { ($_ - $m) ** 2 } @$a ]);
}

exit main(@ARGV);
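For example, to compare https against plain http on the same page, an invocation like this (the localhost URL is just a placeholder) runs 100 requests over 5 concurrent connections, then again with persistent connections disabled:

  perl webstress.pl --conmax 5 --reqs 100 https://localhost/
  perl webstress.pl --conmax 5 --reqs 100 --desist https://localhost/

The second output line gives min/mean/median/max/stdev of per-request latency in milliseconds.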