in reply to Re^4: Threads and HTTPS Segfault
in thread Threads and HTTPS Segfault

I will have to respectfully disagree with you, BrowserUk. Delays from SSL/TLS on modern hardware are hardly noticeable nowadays; your network latency would have to be incredibly low for decryption to take longer than fetching the data. My idea is that, with an event framework, a single processor can decrypt one response while it waits on the network for the others. As long as network latency is greater (hopefully much greater) than the time decryption takes, performance should not suffer.

In my own experiments, the prohibitive delay with HTTPS comes from the handshake that begins the encrypted connection. The client and server exchange several messages back and forth, each round trip subject to network latency! Taking advantage of persistent HTTP/1.1 connections is practically a necessity.
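
As a quick illustration of the handshake cost, here is a minimal sketch (placeholder URL, and it assumes the server honours keep-alive): two back-to-back AnyEvent::HTTP requests with persistent => 1, where the first pays for the TCP and TLS setup and the second should reuse the connection.

#!/usr/bin/env perl
# Rough illustration of handshake cost: the first request pays for the
# TCP + TLS setup, the second should reuse the connection if the server
# allows keep-alive. https://example.com/ is a placeholder.
use strict;
use warnings;
use AnyEvent;
use AnyEvent::HTTP;

my $url = 'https://example.com/';
my $cv  = AnyEvent->condvar;

my $t0 = AnyEvent->time;
http_get $url, persistent => 1, sub {
    printf "first request (with handshake): %.3fs\n", AnyEvent->time - $t0;

    my $t1 = AnyEvent->time;
    http_get $url, persistent => 1, sub {
        printf "second request (connection reused): %.3fs\n", AnyEvent->time - $t1;
        $cv->send;
    };
};
$cv->recv;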

I wrote a benchmark to check my theory. The script is admittedly funky and limited. The experiment uses EV, AnyEvent, and AnyEvent::HTTP to see whether the CPU would be a limiting factor, based on the idea that switching between http and https would show a noticeable difference if it were. Relearning AnyEvent took me a while and I wasted a lot of time on this today, but maybe someone will find it useful for stress-testing or experimenting.

edit: Cleaned up the code to play around with it more.

#!/usr/bin/env perl
##
# webstress.pl -- Website stress-testing with AnyEvent.

use warnings ('FATAL' => 'all');
use strict;

use EV;
use AnyEvent;
use AnyEvent::HTTP;
use Getopt::Long;

sub usage {
    die "Usage:\t$0 [--conmax 5] [--reqs 100] [--desist] <url>\n";
}

sub main {
    my %opts = ('desist' => 0, 'conmax' => 5, 'reqs' => 100);
    GetOptions(\%opts, qw/conmax=i reqs=i desist/) or usage();
    usage() unless @ARGV == 1;

    my $url = shift @ARGV;
    unless ($url =~ m{\Ahttps?://}) {
        print STDERR "URL must have scheme of http[s].\n";
        usage();
    }
    $opts{'url'} = $url;

    my $start = AnyEvent->time;
    my ($times, $bytes) = benchgets(\%opts);
    my $elapsed = AnyEvent->time - $start;

    printstats({
        'reqs'    => $opts{'reqs'},
        'elapsed' => $elapsed,
        'times'   => $times,
        'bytes'   => $bytes,
    });
    return 0;
}

sub benchgets {
    my ($opts) = @_;
    my ($conmax, $reqcount, $url, $desist) =
        @{$opts}{qw/conmax reqs url desist/};
    my ($i, $bytes, $donecv, @times) = (0, 0, AnyEvent->condvar, ());

    # One begin() per expected response; recv() returns after the matching end()s.
    $donecv->begin for 1 .. $reqcount;

    my $clockget;
    $clockget = sub {
        my $reqbeg = AnyEvent->time;
        my $cb = sub {
            my ($body, $hdrs) = @_;
            die "HTTP @{$hdrs}{'Status','Reason'}"
                unless $hdrs->{'Status'} == 200;
            die 'Content length is zero' if length $body == 0;

            $bytes += length $body;
            my $t = AnyEvent->time - $reqbeg;
            push @times, $t;
            $donecv->end;

            # After each response is received, send out another request.
            $clockget->() unless $i >= $reqcount;
        };
        # --desist disables persistent connections, forcing a new handshake per request.
        http_get($url, 'persistent' => !$desist, $cb);
        ++$i;
    };

    # Start off a self-spawning batch of requests.
    $clockget->() for 1 .. $conmax;

    # Continue from here after the last response is received.
    $donecv->recv;

    return \@times, $bytes;
}

sub printstats {
    my ($opts) = @_;
    my ($reqcount, $elapsed, $T, $bytes) =
        @{$opts}{'reqs', 'elapsed', 'times', 'bytes'};

    @$T = sort { $a <=> $b } @$T;    # numeric sort makes min, max, median easier

    # Print simple statistics.
    printf "%0.3f seconds; %d requests (%0.1f/sec); %d bytes (%d/sec)\n",
        $elapsed, $reqcount, $reqcount / $elapsed, $bytes, $bytes / $elapsed;
    printf "%d min; %d mean; %d med; %d max; %d stdev\n",
        map { $_ * 1_000 }
            ($T->[0], mean($T), $T->[$#$T/2], $T->[$#$T], stdev($T));
    return;
}

sub sum  { my $a; $a += $_ for @{$_[0]}; $a }
sub mean { sum($_[0]) / @{$_[0]} }
sub stdev {
    my $a = shift;
    my $m = mean($a);
    sqrt mean([ map { ($_ - $m) ** 2 } @$a ]);
}

exit main(@ARGV);
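
For example (example.com standing in for whatever host you are allowed to hammer), running

    ./webstress.pl --conmax 10 --reqs 100 https://example.com/

keeps ten requests in flight until 100 have completed. Compare the same path over plain http against https, and add --desist to force a fresh connection (and handshake) for every request.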

Re^6: Threads and HTTPS Segfault
by BrowserUk (Patriarch) on Aug 23, 2011 at 02:28 UTC

    It would be interesting to see the results, but I don't have a handy https site that I can hammer in order to test it.

    Do all/most/any https servers support persistent connections? I didn't think they did.

    I thought I remembered reading that persistent https connections were considered a security risk, and that this was why Google's recent release of a patch to short-circuit the handshaking was deemed a prerequisite for the adoption of https on GMail: without it, AJAX over https was virtually impossible.

    But, this is just stuff I've read (and possibly misremembered), not anything I've actually done myself.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      I'm sorry I could not respond yesterday. HTTP/1.1 connections are persistent by default, and HTTP/1.0 has the Keep-Alive option for requesting persistent connections. This should not be a problem as long as the server is fully HTTP/1.1 compliant.
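
      Outside of AnyEvent, the same thing is available in plain LWP; here is a minimal sketch (placeholder URL, and it assumes LWP::Protocol::https is installed): passing keep_alive to the LWP::UserAgent constructor sets up a connection cache, so repeated requests to the same host should reuse the socket and skip the extra handshakes when the server allows it.

      #!/usr/bin/env perl
      # Reuse one TLS connection across requests with LWP's connection cache.
      use strict;
      use warnings;
      use LWP::UserAgent;

      # keep_alive => 1 installs an LWP::ConnCache, so requests to the same
      # host:port reuse the socket when the server honours keep-alive.
      my $ua = LWP::UserAgent->new(keep_alive => 1, timeout => 30);

      for my $i (1 .. 3) {
          my $res = $ua->get('https://example.com/');   # placeholder URL
          printf "request %d: %s, %d bytes\n",
              $i, $res->status_line, length($res->decoded_content // '');
      }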

      I had never considered persistent AJAX connections! AJAX could be problematic if you are sending XMLHttpRequests to the server only in response to user events: even persistent connections would time out during a lull in events. The Comet and server-push techniques instead use long-lived requests that the server does not answer immediately, a trick that would not be necessary with persistent connections. So maybe AJAX was not implemented with persistence in mind at all?
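
      For what it's worth, here is a minimal client-side sketch of that long-poll pattern with AnyEvent::HTTP; the /events endpoint is hypothetical, and a Comet-style server would hold each request open until it has something to report.

      #!/usr/bin/env perl
      # Client-side long-poll loop: reissue the request as soon as the
      # previous one is answered, reusing the connection between polls.
      use strict;
      use warnings;
      use AnyEvent;
      use AnyEvent::HTTP;

      my $forever = AnyEvent->condvar;

      my $poll; $poll = sub {
          http_get 'https://example.com/events',    # hypothetical endpoint
              persistent => 1,      # reuse the TLS connection between polls
              timeout    => 300,    # allow the server to hold the request open
              sub {
                  my ($body, $hdrs) = @_;
                  print "event: $body\n" if $hdrs->{'Status'} == 200;
                  $poll->();        # immediately issue the next long poll
              };
      };
      $poll->();
      $forever->recv;               # run the event loop indefinitely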

      I cleaned up my script and will update the source code in my earlier post. Here is some sample output. The min, max, mean, etc. line shows values in milliseconds, while the line above shows plain seconds.