flexvault has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

I have used IO::Socket::INET with good performance on the same machine, and now want to use it between 2 Linux servers. Both servers have GigE interfaces, and the Ethernet cables are connected to a GigE managed switch. As a baseline I tested with 'ftp' between the two servers: it sustains about 97% of wire speed for the entire transfer of a 190+GB test file, in either direction. Great!

But when I use the Perl client/server model, I get a maximum of only 10Mbps, the same as if I were using an old standard 10Mbit Ethernet setup. The cores are running at less than 3% of capacity, so the servers have plenty more horsepower if needed. Is this normal?
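
For reference, the client and server are roughly this shape ( a trimmed-down sketch, not my exact test code; the hostname, port, and block size are illustrative ):

    # --- receiving side ( run on server B; sketch ) ---
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $listener = IO::Socket::INET->new(
        LocalPort => 9000,            # illustrative port
        Proto     => 'tcp',
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";

    my $peer = $listener->accept or die "accept: $!";
    my ( $buf, $total ) = ( '', 0 );
    while ( my $n = sysread( $peer, $buf, 64 * 1024 ) ) {
        $total += $n;                 # the real test writes the data out
    }

    # --- sending side ( run on server A; sketch ) ---
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'server-b',       # illustrative hostname
        PeerPort => 9000,
        Proto    => 'tcp',
    ) or die "connect: $!";

    my $chunk = 'x' x ( 64 * 1024 );
    for ( 1 .. 100_000 ) {            # about 6GB at 64KB per write
        syswrite( $sock, $chunk ) // die "write: $!";
    }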

I'm using Perl 5.14.2 (32-bit) on Debian 6.0.10 (AMD 64-bit). Both servers are identical in hardware and software. Has anyone seen this type of behavior?

Regards...Ed

"Well done is better than well said." - Benjamin Franklin

Replies are listed 'Best First'.
Re: IO::Socket::INET performance on Linux
by BrowserUk (Patriarch) on Nov 17, 2014 at 21:34 UTC
    But when I use the Perl client/server model, I get a maximum of only 10Mbps,

    What do your client and server look like?

    I've found I get better throughput using recv or sysread than using readline or read; the biggest difference seems to be that the latter cause greater memory churn.
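
    Roughly the shape I mean (a sketch; $sock and the 64KB block size stand in for whatever your code uses):

        my $buf;
        while (1) {
            # sysread bypasses the PerlIO buffer layer and reuses $buf
            my $n = sysread( $sock, $buf, 64 * 1024 );
            die "sysread: $!" unless defined $n;   # read error
            last if $n == 0;                       # EOF
            # ... process the $n octets now in $buf ...
        }

    versus readline($sock) or read($sock, $buf, ...), which copy everything through the PerlIO buffering first.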

    The cores are using less than 3% of capacity

    What does the memory usage look like? (As in memory churn rather than overall usage.)


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      Hello BrowserUk,

      Just a small follow up to this thread. I got this to work very well, but it doesn't correspond with any of the documentation I've seen. ( Maybe I'm looking in all the wrong places ). I searched the web and found an article stating that you can't max out a GigE connection unless you use an MTU of 9000 octets ( jumbo frames ), which is hard to arrange since every interface in the path must support jumbo frames. Since I was using a receive buffer of 87380 octets and a send buffer of 16384, that didn't make much sense to me. The article said that with the default MTU ( 1500 octets ) you can't reach full GigE speed, but you can get close to 90%.
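
      For what it's worth, this is roughly how I was checking the buffer sizes ( a sketch for an already-open socket $sock; the constants come from the standard Socket module ):

          use Socket qw(SOL_SOCKET SO_RCVBUF SO_SNDBUF);

          # Report the kernel's current buffer sizes for the socket
          my $rcv = unpack 'i', getsockopt( $sock, SOL_SOCKET, SO_RCVBUF );
          my $snd = unpack 'i', getsockopt( $sock, SOL_SOCKET, SO_SNDBUF );
          print "SO_RCVBUF=$rcv SO_SNDBUF=$snd\n";   # 87380 / 16384 in my case

          # Shrinking them toward the MTU ( the kernel may round the value )
          setsockopt( $sock, SOL_SOCKET, SO_SNDBUF, pack( 'i', 1500 ) )
              or die "setsockopt: $!";
          setsockopt( $sock, SOL_SOCKET, SO_RCVBUF, pack( 'i', 1500 ) )
              or die "setsockopt: $!";

      ( IO::Socket handles also have a sockopt() method that wraps the same calls. )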

      Using my original defaults I was getting only 10Mbps, but when I reduced the send/receive buffers to 1500 octets I was able to get approximately 850Mbps. I then modified my test case to send a first data packet of only 1 octet, receive it back, and loop, incrementing the size by 1 octet each cycle and checking that the sent and received data were equal ( note: this gave approximately 400Mbps in both directions ). This worked, and I let it run. The next time I looked, the test case had hung at a size a little more than 128 times the MTU.
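
      The growing echo test was essentially this ( a sketch; the far end just writes back whatever it reads, and the real code also logged timings ):

          # Send $size octets, read the same $size octets back, compare, grow.
          # Assumes a blocking socket, so syswrite sends the whole buffer.
          for my $size ( 1 .. 1_000_000 ) {    # illustrative upper bound
              my $out = 'A' x $size;
              syswrite( $sock, $out ) // die "write: $!";

              my $in = '';
              while ( length($in) < $size ) {  # short reads are normal on TCP
                  my $n = sysread( $sock, $in, $size - length($in), length($in) );
                  die "read: $!"    unless defined $n;
                  die "peer closed" if $n == 0;
              }
              die "mismatch at $size octets" unless $in eq $out;
          }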

      After this happened several times, I concluded that I had a timing or buffer-management problem somewhere in the Perl or Debian system drivers. I added a pause to the send routine:

      use Time::HiRes qw(usleep);   # usleep() comes from Time::HiRes, not core
      usleep 500;                   # 1st guess for waiting ( 500 microseconds )

      after each 64 writes to the socket, and that eliminated the hang.
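
      In context, the send side ended up shaped like this ( a sketch; @chunks stands in for however the data is actually produced ):

          use Time::HiRes qw(usleep);

          my $writes = 0;
          for my $chunk (@chunks) {
              syswrite( $sock, $chunk ) // die "write: $!";
              usleep 500 if ++$writes % 64 == 0;   # breathe every 64 writes
          }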

      I ran the test for 7 days and transferred 156TBytes in each direction without any errors and without any hangs. I measured the throughput with the system monitor, so the figures include TCP/IP overhead.

      This may be unique to my network/hardware/software combination, and I wanted to test on other hardware, but I only have GigE interfaces on a few servers, and those are plugged into 10/100 switches.

      Summary: it's working now and I'm on to the next project. Thanks for the help.

      Regards...Ed

      "Well done is better than well said." - Benjamin Franklin

      Hello BrowserUK,

      Sorry for the delay in answering, but your comments sent me on a long journey. I used both 'recv' and 'sysread' without much change in the results. I then compiled a new version of Perl ( Perl-5.14.4 ) and had similar results.

      But the problem may be in the test case I'm using: when I do a single transfer I can max out the GigE interfaces, but when I test with random sizes it runs at the 10Mbps rate. I may try it on an AIX system, just in case it's a quirk of the Debian Linux systems. ( It wouldn't be the first time that Debian is different ).

      Thanks for the help!

      Regards...Ed

      "Well done is better than well said." - Benjamin Franklin

        I do a single transfer I can max out the GigE interfaces, but when I test with random sizes it runs at the 10Mbps rate

        Sorry. Could you explain a little more about the difference(s) between "a single transfer" and "random sizes"?


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.