in reply to Re^3: Threads and HTTPS Segfault
in thread Threads and HTTPS Segfault

Is that true for https connections?

I thought SSL connections imposed a fairly high cpu load because of the decryption requirements. Even on a pretty low-bandwidth connection, cpu usage can quickly become the limiting factor for throughput. Start trying to decode multiple concurrent streams on the same processor (as with Coro, POE and other event-driven architectures), and cpu will definitely become the limiting factor for throughput.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^5: Threads and HTTPS Segfault
by juster (Friar) on Aug 23, 2011 at 01:38 UTC

    I will have to respectfully disagree with you, BrowserUk. On modern hardware, the delay SSL/TLS adds is hardly noticeable nowadays. Your network latency would have to be incredibly low for decryption to take longer than fetching the data. My idea was that, by using an event framework, a single processor could be decrypting one response while it waits on the network for another. As long as network latency is greater, hopefully much greater, than the time decryption takes, performance should not suffer.

    From my own experimenting, a prohibitive delay when using HTTPS comes from the handshake that begins the encrypted connection. The client and server exchange messages back and forth, each subject to network latency! Taking advantage of persistent HTTP 1.1 connections is practically a necessity.
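    As a rough illustration of that handshake cost (this is not my benchmark script; LWP::UserAgent with keep_alive is just the quickest way to show the effect, the URL is a placeholder, and LWP needs its https support installed), compare repeated requests with and without connection reuse:

    use strict;
    use warnings;
    use Time::HiRes qw(time);
    use LWP::UserAgent;

    # Placeholder URL -- point this at an https server you are allowed to hit.
    my $url = 'https://example.com/';

    for my $keep_alive (0, 1) {
        # keep_alive => 1 gives the agent a connection cache, so later
        # requests reuse the TCP+TLS connection instead of repeating the
        # full handshake every time.
        my $ua    = LWP::UserAgent->new( keep_alive => $keep_alive );
        my $start = time();
        $ua->get($url) for 1 .. 10;
        printf "keep_alive=%d: 10 GETs in %.3fs\n", $keep_alive, time() - $start;
    }

    With keep_alive turned on, the handshake is paid once instead of ten times, which is exactly the latency that persistent connections avoid.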

    I made a benchmark to check my theory. The script is admittedly funky and limited. The experiment uses EV, AnyEvent, and AnyEvent::HTTP to see whether CPU would be a limiting factor, based on the idea that switching between http and https would show noticeable differences if it were. Relearning AnyEvent took me a while and I wasted a lot of time on this today, but maybe someone will find it useful for stress-testing or experimenting.
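    The gist of it is roughly the sketch below; the URLs are placeholders, https support needs Net::SSLeay installed, and the real script collects more detailed statistics than a single wall-clock time:

    use strict;
    use warnings;
    use Time::HiRes qw(time);
    use EV;              # load EV first so AnyEvent picks it as the event loop
    use AnyEvent;
    use AnyEvent::HTTP;

    # Placeholder URLs -- point both at the same resource on a server you
    # are allowed to hammer.
    my %url = (
        http  => 'http://example.com/',
        https => 'https://example.com/',
    );
    my $requests = 20;

    for my $scheme ( sort keys %url ) {
        my $cv    = AnyEvent->condvar;
        my $start = time();

        $cv->begin for 1 .. $requests;    # one begin per outstanding request
        for ( 1 .. $requests ) {
            http_get $url{$scheme}, sub { $cv->end };
        }
        $cv->recv;    # run the event loop until every callback has fired

        printf "%-5s: %d requests in %.3fs\n", $scheme, $requests, time() - $start;
    }

    If CPU were the limiting factor, the https pass should take noticeably longer than the http pass; if latency dominates, the two should stay close apart from the extra handshakes.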

      It would be interesting to see the results, but I don't have a handy https site that I can hammer in order to test it.

      Do all/most/any https servers support persistent connections? I didn't think they did.

      I thought I remembered reading that persistent https connections were considered a security risk, and that this was why Google's recent release of a patch to short-circuit the handshaking was deemed a prerequisite for the adoption of https on GMail; that without it, AJAX over https was virtually impossible.

      But, this is just stuff I've read (and possibly misremembered), not anything I've actually done myself.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        I'm sorry I could not respond yesterday. HTTP 1.1 connections are persistent by default, and HTTP 1.0 offers keep-alive as an option for starting persistent connections. This should not be a problem as long as the server is fully HTTP 1.1 compatible.
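        A quick way to check what a particular server actually does is to look at the protocol version and Connection header in its response (the URL is a placeholder):

        use strict;
        use warnings;
        use LWP::UserAgent;

        # Placeholder URL -- substitute the https server you want to probe.
        my $url = 'https://example.com/';

        my $ua  = LWP::UserAgent->new( keep_alive => 1 );
        my $res = $ua->get($url);

        # An HTTP/1.1 server keeps the connection open unless it answers
        # "Connection: close"; an HTTP/1.0 server has to opt in with
        # "Connection: keep-alive".
        printf "Protocol:   %s\n", $res->protocol || 'unknown';
        printf "Connection: %s\n", $res->header('Connection') || '(no header)';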

        I never considered persistent AJAX connections! I think AJAX could be problematic if you are sending "XMLHTTPRequests" to a server in response to user events: even if those connections were persistent, they would time out during a lull in events. The techniques of Comet and server push use long-lived connections where the server deliberately does not respond immediately, which would not be necessary with persistent connections. So maybe AJAX is not implemented with persistence in mind at all?

        I cleaned up my script and will update my earlier post with the source code. Here is some sample output. The min, max, mean, etc. line shows values in milliseconds, while the line above it is in plain seconds.