in reply to IO::Socket::INET performance on Linux

But when I use the Perl client/server model, I only get a maximum of 10Mbits,

What do your client and server look like?

I've found I get better throughput using recv or sysread than using readline or read; and the biggest difference seems to be that the latter cause greater memory churn.
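
For illustration, a sysread-based receive loop might look something like the sketch below ( untested; it assumes a connected $sock and that you already know how many octets to expect ):

    use strict;
    use warnings;

    # Read exactly $want octets from $sock with sysread (unbuffered),
    # coping with short reads. Returns the data, or undef on EOF/error.
    sub read_exactly {
        my ( $sock, $want ) = @_;
        my $buf = '';
        while ( length($buf) < $want ) {
            my $got = sysread( $sock, $buf, $want - length($buf), length($buf) );
            return undef unless defined $got && $got > 0;
        }
        return $buf;
    }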

The cores are using less than 3% of capacity

What does the memory usage look like? (As in memory churn rather than overall usage.)


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^2: IO::Socket::INET performance on Linux
by flexvault (Monsignor) on Nov 30, 2014 at 15:49 UTC

    Hello BrowserUk,

    Just a small follow-up to this thread. I got this to work very well, but it doesn't correspond with any of the documentation I've seen ( maybe I'm looking in all the wrong places ). I searched the web and found an article stating that you can't max out a GigE connection unless you use an MTU of 9000 octets ( jumbo frames ), which is hard to arrange since every interface on the path must support jumbo frames. Since I was using a receive buffer of 87380 octets and a send buffer of 16384 octets, that didn't make much sense to me. The article said the default MTU ( 1500 octets ) couldn't give the full GigE speed, but could get close to 90%.
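
    For reference, the per-socket buffer sizes can be inspected or changed with sockopt; a quick sketch ( host and port are hypothetical, and Linux may report roughly double the value you set ):

        use strict;
        use warnings;
        use IO::Socket::INET;
        use Socket qw( SO_RCVBUF SO_SNDBUF );

        # Hypothetical peer for illustration only.
        my $sock = IO::Socket::INET->new(
            PeerAddr => 'server.example.com',
            PeerPort => 12345,
            Proto    => 'tcp',
        ) or die "connect failed: $!";

        # Current kernel buffer sizes for this socket.
        printf "SO_RCVBUF = %d octets\n", $sock->sockopt(SO_RCVBUF);
        printf "SO_SNDBUF = %d octets\n", $sock->sockopt(SO_SNDBUF);

        # Request different sizes before starting the bulk transfers.
        $sock->sockopt( SO_RCVBUF, 87380 );
        $sock->sockopt( SO_SNDBUF, 16384 );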

    Using my original defaults, I was getting only 10Mbps, but when I reduced the send/receive packets to 1500 octets, I was able to get approximately 850Mbps. I modified my test case to send a first packet of only 1 octet, receive it back, and then loop, incrementing by 1 octet on each cycle and checking that the data sent and the data received were equal ( note: this gave approximately 400Mbps in each direction ). This worked, and I let it run. The next time I looked, the test case was hung on a size that was a little more than 128 times the MTU size.
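
    For what it's worth, the shape of that test is roughly the sketch below ( a reconstruction, not the actual code; the peer address is hypothetical and the server is assumed to echo back exactly what it receives ):

        use strict;
        use warnings;
        use IO::Socket::INET;

        my $sock = IO::Socket::INET->new(
            PeerAddr => 'server.example.com',   # hypothetical echo server
            PeerPort => 12345,
            Proto    => 'tcp',
        ) or die "connect failed: $!";

        for ( my $size = 1 ; ; $size++ ) {
            my $payload = 'A' x $size;

            # Write the whole payload, allowing for short writes.
            my $off = 0;
            while ( $off < $size ) {
                my $wrote = syswrite( $sock, $payload, $size - $off, $off );
                die "write failed: $!" unless defined $wrote;
                $off += $wrote;
            }

            # Read exactly $size octets back, allowing for short reads.
            my $echo = '';
            while ( length($echo) < $size ) {
                my $got = sysread( $sock, $echo, $size - length($echo), length($echo) );
                die "read failed or EOF: $!" unless defined $got && $got > 0;
            }

            die "payload mismatch at $size octets\n" unless $echo eq $payload;
        }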

    After this happened several times, I concluded that I had a timing or buffer-management problem somewhere in Perl or in the Debian system drivers. I added a pause to the send routine after every 64 writes to the socket:

    use Time::HiRes qw( usleep );
    usleep 500;   # 1st guess for waiting

    and that eliminated the hang.
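
    Put together, the throttle amounts to something like this sketch ( send_throttled is a hypothetical helper, not the original send routine ):

        use strict;
        use warnings;
        use Time::HiRes qw( usleep );

        # Write each chunk fully, pausing 500 microseconds after every
        # 64th write to the socket. $sock is a connected socket handle.
        sub send_throttled {
            my ( $sock, @chunks ) = @_;
            my $writes = 0;
            for my $chunk (@chunks) {
                my $off = 0;
                while ( $off < length $chunk ) {
                    my $wrote = syswrite( $sock, $chunk, length($chunk) - $off, $off );
                    die "write failed: $!" unless defined $wrote;
                    $off += $wrote;
                }
                usleep 500 if ++$writes % 64 == 0;   # 1st guess for waiting
            }
            return;
        }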

    I ran the test for 7 days and transferred 156TBytes in each direction without any errors and without any hangs. I measured that with the system monitor, so the figures include TCP/IP overhead.

    This may be unique to my setup ( network/hardware/software ). I'd like to test on other hardware, but I only have GigE interfaces on a few servers, and they are plugged into 10/100 switches.

    Summary: it's working now and I'm on to the next project. Thanks for the help.

    Regards...Ed

    "Well done is better than well said." - Benjamin Franklin

Re^2: IO::Socket::INET performance on Linux
by flexvault (Monsignor) on Nov 18, 2014 at 20:53 UTC

    Hello BrowserUk,

    Sorry for the delay in answering, but your comments sent me on a long journey. I used both 'recv' and 'sysread' without much change in the results. I then compiled a new version of Perl ( Perl-5.14.4 ) and had similar results.

    But the problem may be in the test case I'm using, since when I do a single transfer I can max out the GigE interfaces, but when I test with random sizes it runs at the 10Mbps rate. I may try it on an AIX system, just in case it's a quirk of the Debian Linux systems. ( It wouldn't be the first time that Debian is different. )

    Thanks for the help!

    Regards...Ed

    "Well done is better than well said." - Benjamin Franklin

      I do a single transfer I can max out the GigE interfaces, but when I test with random sizes it runs at the 10Mbps rate

      Sorry. Could you explain a little more about the difference(s) between "a single transfer" and "random sizes"?


      With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        BrowserUk,

          Sorry. Could you explain a little more about the difference(s) between "a single transfer" and "random sizes"?

        I implemented the client routine as a function that I can 'require' at execution. Obviously the server runs all the time, and the client sends a command to the server and waits for a reply. Normally this reply is processed and the client exits. This is a single transfer or transaction. This works and I get near wire speed between the client and server.
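
        In outline, a single transfer looks something like the sketch below ( host, port, and the newline-terminated protocol are illustrative assumptions ):

            use strict;
            use warnings;
            use IO::Socket::INET;

            # One transfer: connect, send one command, read one reply, exit.
            sub single_transfer {
                my ($command) = @_;
                my $sock = IO::Socket::INET->new(
                    PeerAddr => 'server.example.com',   # hypothetical server
                    PeerPort => 12345,
                    Proto    => 'tcp',
                ) or die "connect failed: $!";

                print {$sock} $command, "\n";
                my $reply = <$sock>;    # server is assumed to answer with one line
                close $sock;
                return $reply;
            }

            print single_transfer('PING');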

        Now, I have a test case that calls the client with different command sizes and verifies that the server received the correct command by having it return the exact data that was sent and comparing the two. This runs until I stop the test case. This is where I get the 10Mbps. So it looks like the test case has a problem, but the funny thing is that this doesn't happen when the client and server are on the same machine. Note: I disabled the compare just in case, but it didn't change the 10Mbps.

        I may be hitting some limitation in the Debian socket implementation, so as long as the 'single transfer' is correct, I'm okay. I've added a CRC to verify the data in addition to the size of the transfer. If either is incorrect, I ask for a re-transfer.
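
        The CRC check itself can be as simple as the sketch below ( it assumes the String::CRC32 module; the length and CRC travel with the data in whatever framing the protocol uses ):

            use strict;
            use warnings;
            use String::CRC32;   # exports crc32(); Digest::CRC would also do

            # True if the received payload matches the advertised length and
            # CRC32; false means ask the peer for a re-transfer.
            sub payload_ok {
                my ( $payload, $expect_len, $expect_crc ) = @_;
                return 0 unless length($payload) == $expect_len;
                return crc32($payload) == $expect_crc;
            }

            # Sender side: compute what travels alongside the data.
            my $data = 'example payload';
            my ( $len, $crc ) = ( length $data, crc32($data) );
            print payload_ok( $data, $len, $crc ) ? "ok\n" : "re-transfer\n";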

        Best Regards...Ed

        "Well done is better than well said." - Benjamin Franklin