in reply to RFC - Linux::TCPServer (new module)

I would recommend Net::TCPServer::Linux as well. An important item to document will be what's so cool about yours that a pure-Perl solution can't do. If it's speed, I'd include benchmarks.

My criteria for good software:
  1. Does it work?
  2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

Re^2: RFC - Linux::TCPServer (new module)
by ph713 (Pilgrim) on Oct 29, 2005 at 19:30 UTC
    There's some commentary in the .pod on how the code takes advantage of mmap() shared anonymous memory and lockless IPC for efficiency gains, but you're right: some good benchmarking versus, say, Net::Server::PreFork would be nice to have in there. I'll have to write up something to do the testing with.
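
    For anyone unfamiliar with the approach, here's a minimal sketch of the general technique (not Linux::TCPServer's actual memory layout), using map_anonymous from the CPAN File::Map module: the parent maps anonymous shared memory before forking, and each child writes only into its own fixed-size slot, so the processes can exchange status without any locking:

        use strict;
        use warnings;
        use File::Map qw(map_anonymous);

        my $slots     = 4;    # pretend we run four child processes
        my $slot_size = 4;    # one 32-bit counter per child

        # 'shared' anonymous maps are inherited by forked children.
        map_anonymous my $shm, $slots * $slot_size, 'shared';

        for my $slot (0 .. $slots - 1) {
            defined(my $pid = fork) or die "fork: $!";
            next if $pid;                        # parent keeps forking
            for my $n (1 .. 1000) {              # child: bump only its own slot
                substr($shm, $slot * $slot_size, $slot_size) = pack 'N', $n;
            }
            exit 0;
        }
        wait for 1 .. $slots;

        # Parent reads every child's counter, no locks involved.
        printf "slot %d: %d\n", $_,
            unpack('N', substr($shm, $_ * $slot_size, $slot_size))
            for 0 .. $slots - 1;

    Because each writer owns its slot outright, there's nothing to serialize on; readers just see whatever the last complete write left there.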

    Update: It looks like Siege will be good for the testing. I'm writing up a test script that serves basic HTTP/1.0 responses to its requests and can run under either Linux::TCPServer or Net::Server::PreFork; we'll see how it fares.
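
    The Net::Server::PreFork half of such a script would look roughly like this (port, payload size, and tuning numbers here are arbitrary placeholders, not the values used for the figures below):

        package BenchHTTP;
        use strict;
        use warnings;
        use base 'Net::Server::PreFork';

        my $body = 'x' x 4096;    # arbitrary fixed payload

        sub process_request {
            my $self = shift;
            # Inside process_request, STDIN/STDOUT are the client socket.
            # Read and discard the HTTP/1.0 request; headers end at a blank line.
            while (defined(my $line = <STDIN>)) {
                $line =~ s/\r?\n\z//;
                last if $line eq '';
            }
            print "HTTP/1.0 200 OK\r\n",
                  "Content-Type: text/plain\r\n",
                  "Content-Length: ", length($body), "\r\n",
                  "Connection: close\r\n\r\n",
                  $body;
        }

        # max_requests is the number of connections a child serves before
        # exiting, i.e. the "connections per child process" knob below.
        BenchHTTP->run(port => 8080, max_requests => 100);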

      FYI, I have some preliminary results, and it looks like I'm getting about 2-3x the connection-handling speed of the pure-Perl competition, depending on a lot of little variables. The results in the module distribution will of course have to include more detail, and I'll leave the benchmarking script in the module too:

      Linux::TCPServer - 100 connections per child process:

      ** siege 2.64
      ** Preparing 3 concurrent users for battle.
      The server is now under siege..  done.
      Transactions:              15000 hits
      Availability:             100.00 %
      Elapsed time:               6.93 secs
      Data transferred:          62.96 MB
      Response time:              0.00 secs
      Transaction rate:        2164.50 trans/sec
      Throughput:                 9.08 MB/sec
      Concurrency:                2.72
      Successful transactions:   15000
      Failed transactions:           0
      Longest transaction:        0.44
      Shortest transaction:       0.00

      Linux::TCPServer - 1000 connections per child process:

      ** siege 2.64
      ** Preparing 3 concurrent users for battle.
      The server is now under siege..  done.
      Transactions:              15000 hits
      Availability:             100.00 %
      Elapsed time:               7.64 secs
      Data transferred:          62.96 MB
      Response time:              0.00 secs
      Transaction rate:        1963.35 trans/sec
      Throughput:                 8.24 MB/sec
      Concurrency:                2.82
      Successful transactions:   15000
      Failed transactions:           0
      Longest transaction:        0.71
      Shortest transaction:       0.00

      Net::Server::PreFork - 100 connections per child process:

      ** siege 2.64
      ** Preparing 3 concurrent users for battle.
      The server is now under siege..  done.
      Transactions:              15000 hits
      Availability:             100.00 %
      Elapsed time:              19.89 secs
      Data transferred:          62.96 MB
      Response time:              0.00 secs
      Transaction rate:         754.15 trans/sec
      Throughput:                 3.17 MB/sec
      Concurrency:                2.87
      Successful transactions:   15000
      Failed transactions:           0
      Longest transaction:        0.75
      Shortest transaction:       0.00

      Net::Server::PreFork - 1000 connections per child process:

      ** siege 2.64
      ** Preparing 3 concurrent users for battle.
      The server is now under siege..  done.
      Transactions:              15000 hits
      Availability:             100.00 %
      Elapsed time:              14.92 secs
      Data transferred:          62.96 MB
      Response time:              0.00 secs
      Transaction rate:        1005.36 trans/sec
      Throughput:                 4.22 MB/sec
      Concurrency:                2.61
      Successful transactions:   15000
      Failed transactions:           0
      Longest transaction:        1.70
      Shortest transaction:       0.00

      Artificial benchmarking has indeed proved to be a wise path to go down. It has uncovered some leaks (either of PerlIO objects or of the Perl stack in general; it's hard to tell which) that weren't apparent in my real-world testing, which I had thought was fairly strenuous. An update to 0.14 is coming sometime Sunday that moves some of the leaky XS code for converting socket FDs into Perl IO objects back into Perl, where at least it works correctly. A change in the handling of socket closing is pending too, as my original understanding of the whole orderly TCP shutdown issue was wrong (it turns out to be a very application-protocol-specific thing, so I'll leave that to module users if they need it).
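
      For the curious, the pure-Perl direction for that FD conversion is roughly the following (the helper name is made up for illustration; this isn't the module's actual code):

          use strict;
          use warnings;
          use IO::Handle;

          # Illustrative helper: wrap a numeric socket fd in a Perl filehandle.
          # The '&=' in the mode means fdopen() semantics: reuse the existing
          # descriptor rather than dup()ing it.
          sub fd_to_handle {
              my ($fd) = @_;
              open(my $fh, '+<&=', $fd) or die "fdopen($fd) failed: $!";
              $fh->autoflush(1);
              return $fh;
          }

          # Orderly TCP shutdown stays application-specific; if the protocol
          # wants a half-close before reading the peer's final data, that's:
          #   shutdown($fh, 1);   # SHUT_WR: no more writes from our side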