Actually, after thinking a bit more about it, I think I understand what happens.
I thought that SSL_WANT_READ could only happen when some SSL-specific transport data were in the socket buffer (TLS handshakes, TLS session tickets, etc.), which doesn't happen very often compared to real payload data (at least when large transfers are performed). But I totally forgot that it also happens each time an incomplete SSL frame is in the socket buffer, which can basically happen for every single TCP packet received, as long as there aren't enough packets received yet to complete the current SSL frame...
So, there can be at most MAX_SSL_FRAME_SIZE / TCP_FRAME_SIZE failed sysread calls preceding each successful sysread call (if select/sysread is called between each received TCP packet).
In my case, TCP_FRAME_SIZE = MSS (1460) because I'm performing large transfers.
Given that MAX_SSL_FRAME_SIZE is 16384, in the worst case I can have 16384/1460 ≈ 11.22, i.e. up to 11 failed (empty) sysread calls preceding one successful sysread of 16384 bytes.
Worst-case read failure rate = 11/12 ≈ 91.67%, which is indeed not far from the worst case I observed.
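The arithmetic above can be checked with a quick back-of-the-envelope script (just a sketch; the variable names are mine, the values are the ones from this post):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $maxSslFrameSize = 16384;  # maximum TLS record payload size
my $tcpFrameSize    = 1460;   # MSS observed during large transfers

# Worst case: one empty sysread per received TCP packet until the
# current SSL frame is complete, then one successful sysread.
my $failedReads = int($maxSslFrameSize / $tcpFrameSize);
my $failureRate = 100 * $failedReads / ($failedReads + 1);

printf "Failed sysread calls per successful read: %d\n", $failedReads;
printf "Worst-case read failure rate: %.2f%%\n", $failureRate;
```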
Of course it all depends on the rate at which the application tries to read/empty the socket buffer. Ideally, in the SSL case, I guess the application should not try to read anything before there is at least one complete SSL frame in the buffer, to avoid wasting I/Os.
I wanted to confirm this, so I tried the SO_RCVLOWAT (socket receive low-water mark) socket option to tell the kernel that the select call should not succeed on the SSL socket before there are at least 16384 bytes ready to be read. It only required adding the following lines to my non-blocking SSL client:
    use Socket qw'SOL_SOCKET SO_RCVLOWAT';
    ...
    setsockopt($clientSock, SOL_SOCKET, SO_RCVLOWAT, BUFSIZE)
      or die "Failed to set SO_RCVLOWAT on socket: $!";
And as expected, the results are much much better (very low read failure rate due to SSL_WANT_READ):
    $ perl sslclinb.pl
    Connecting to SSL server (192.168.1.10:1234)
    Connected, switching to non-blocking mode.
    Downloading data from server...
    Transfer speed: 117.54 MB/s  Read failure due to SSL_WANT_READ: 0.33%  Read failure due to SSL_WANT_WRITE: 0.00%
    Transfer speed: 117.61 MB/s  Read failure due to SSL_WANT_READ: 0.31%  Read failure due to SSL_WANT_WRITE: 0.00%
    Transfer speed: 117.61 MB/s  Read failure due to SSL_WANT_READ: 0.55%  Read failure due to SSL_WANT_WRITE: 0.00%
    Transfer speed: 117.61 MB/s  Read failure due to SSL_WANT_READ: 0.51%  Read failure due to SSL_WANT_WRITE: 0.00%
    Transfer speed: 117.61 MB/s  Read failure due to SSL_WANT_READ: 0.43%  Read failure due to SSL_WANT_WRITE: 0.00%
    Transfer speed: 117.61 MB/s  Read failure due to SSL_WANT_READ: 0.33%  Read failure due to SSL_WANT_WRITE: 0.00%
    Transfer speed: 116.13 MB/s  Read failure due to SSL_WANT_READ: 0.34%  Read failure due to SSL_WANT_WRITE: 0.00%
    Transfer speed: 117.61 MB/s  Read failure due to SSL_WANT_READ: 0.10%  Read failure due to SSL_WANT_WRITE: 0.00%
    Transfer speed: 117.61 MB/s  Read failure due to SSL_WANT_READ: 0.40%  Read failure due to SSL_WANT_WRITE: 0.00%
    Transfer speed: 117.61 MB/s  Read failure due to SSL_WANT_READ: 0.47%  Read failure due to SSL_WANT_WRITE: 0.00%
    ^C
And, also as expected, CPU usage has decreased a lot and is now similar to that of the blocking-socket implementation.
Unfortunately the SO_RCVLOWAT socket option is not available on all systems; for example, Windows doesn't support it. So it would be nice if there were another solution to avoid overloading the CPU and wasting I/Os when using non-blocking sockets with SSL...
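One portable workaround might be to emulate the low-water mark in userspace: peek at the underlying TCP socket with MSG_PEEK (which Windows does support) and only attempt the SSL read once enough ciphertext has accumulated. A rough sketch, with a hypothetical helper of my own (a real implementation would also need a timeout fallback, since the final record of a transfer can be shorter than the threshold):

```perl
use strict;
use warnings;
use Socket qw(MSG_PEEK);

# Hypothetical helper: returns true once at least $min bytes of
# ciphertext are queued on the raw socket underlying the SSL object.
# MSG_PEEK copies the data without consuming it, so OpenSSL still
# sees the full bytes when it reads the socket later.
sub enough_ciphertext {
    my ($rawSock, $min) = @_;
    my $peeked = '';
    my $from = recv($rawSock, $peeked, $min, MSG_PEEK);
    return defined $from && length($peeked) >= $min;
}
```

The main loop would then call select as before, but skip sysread until enough_ciphertext($rawSock, 16384) returns true (falling back to an unconditional read after a short delay so a trailing short record isn't stalled).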
In reply to Re: IO::Socket::SSL / Net::SSLeay inefficient in non-blocking mode? by Yaribz
in thread IO::Socket::SSL / Net::SSLeay inefficient in non-blocking mode? by Yaribz