in reply to Perl threads to open 200 http connections

I would consider this “stress testing” scenario to be ill-advised and misleading.   If you launch hundreds of parallel threads, then unless each of them is doing exactly what your production code does, the only thing you will be “measuring” is the poor design of the test.

First of all, you already know how big The Pipe is.   You know how many megabits or gigabits per second it can carry.   Ballpark the protocol overhead at about 25% and figure that you can probably move the rest through the pipe as data ... assuming zero “hops.”
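That back-of-the-envelope arithmetic is trivial to write down; a sketch, with the 100 Mbit/s link capacity being purely an assumed example figure:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Back-of-the-envelope pipe math (figures are assumed examples).
my $pipe_mbits  = 100;                      # assumed link capacity, Mbit/s
my $overhead    = 0.25;                     # rule-of-thumb protocol overhead
my $data_mbits  = $pipe_mbits * (1 - $overhead);
my $data_mbytes = $data_mbits / 8;          # 8 bits per byte

printf "Usable payload: ~%.0f Mbit/s (~%.1f MB/s)\n",
       $data_mbits, $data_mbytes;
# prints: Usable payload: ~75 Mbit/s (~9.4 MB/s)
```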

Next, you can determine how many simultaneous transfers the computer can handle by working systematically upward in small increments until you see the times begin to degrade sharply.   This is the “elbow-shaped curve” that always exists: the so-called “thrash point.”   Again as a rule of thumb, step back 25% from that point and call it good.
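A minimal sketch of that ramp-up, using fork rather than threads for simplicity; do_transfer() is a hypothetical stand-in that you would replace with a production-shaped unit of work:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Hypothetical unit of work; replace with a real production-shaped transfer.
sub do_transfer { select(undef, undef, undef, 0.01) }   # ~10 ms stub

my @levels = (5, 10, 15, 20);          # small, systematic increments
for my $n (@levels) {
    my $t0 = [gettimeofday];
    my @kids;
    for (1 .. $n) {
        my $pid = fork;
        die "fork: $!" unless defined $pid;
        if ($pid == 0) { do_transfer(); exit 0 }
        push @kids, $pid;
    }
    waitpid $_, 0 for @kids;           # wait for the whole batch
    printf "%3d workers: batch took %.3f s\n", $n, tv_interval($t0);
}
```

When the batch time stops scaling flat and bends upward, you have found the elbow; step back about 25% from there.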

The next part of your exploration should involve stochastic (statistical) modeling, which may or may not involve Perl.   (There are packages for the open-source analytics system, “R,” which are specifically designed for this.   See http://cran.r-project.org/ and search for “stochastic” or “simulation.”)
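If you do stay in Perl for this step, a stochastic model can be as small as drawing times from a distribution; a minimal Monte Carlo sketch, where the 0.4-second mean service time is an invented figure:

```perl
#!/usr/bin/perl
use strict;
use warnings;

srand(42);                  # fixed seed so the run is repeatable
my $mean = 0.4;             # assumed mean service time, seconds
my $n    = 10_000;

# Draw exponential variates via inversion: -mean * ln(U), U uniform on (0,1].
my $total = 0;
for (1 .. $n) {
    $total += -$mean * log(1 - rand());
}
printf "simulated mean service time: %.3f s\n", $total / $n;
```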

(Heh... if you thought Perl was “engaging, addicting and fun” ...)     :-D

You know that the request volume may at times exceed the number of worker-threads that are processing requests.   (An inbound request queue is, or should be, a basic part of the design.)   Therefore, you are interested in the completion times of the requests, since each completion time includes processing time, I/O time, and time spent in the queue(s).
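To make “completion time = queue time + service time” concrete, here is a toy single-worker queue; the arrival and service figures are invented, chosen so that arrivals outpace the worker and the wait visibly grows:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy single-worker queue: a request arrives every 0.5 s, each takes 0.8 s
# to service.  Because service is slower than arrival, queue wait grows.
my ($arrival_gap, $service) = (0.5, 0.8);
my $worker_free = 0;                     # time when the worker next frees up

for my $i (0 .. 4) {
    my $arrive   = $i * $arrival_gap;
    my $start    = $arrive > $worker_free ? $arrive : $worker_free;
    my $finish   = $start + $service;
    $worker_free = $finish;
    printf "req %d: queued %.1f s, completed in %.1f s\n",
           $i, $start - $arrive, $finish - $arrive;
}
# req 4 prints: queued 1.2 s, completed in 2.0 s
```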

It is most useful to approach this by establishing goals, then measuring the system’s sustained ability to meet those goals.   For instance, you might stipulate that “95% of all requests must be serviced and returned to the client within 1.0 seconds.”   And you might stipulate that “the standard deviation of the request times which exceed the 1.0-second rule must not exceed 2.00 seconds.”   Then you model the system and see whether it passes or fails.   If it consistently fails, then you start looking for bottlenecks.
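Checking such a goal against measured times is straightforward; a sketch using an invented sample of completion times:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Invented sample of request completion times, in seconds.
my @times = (0.31, 0.44, 0.52, 0.60, 0.71, 0.75, 0.83, 0.90, 0.97, 1.60);

# 95th percentile, nearest-rank method.
my @sorted = sort { $a <=> $b } @times;
my $p95    = $sorted[int(0.95 * @sorted + 0.5) - 1];

# Fraction of requests meeting the 1.0-second rule.
my $ok = grep { $_ <= 1.0 } @times;
printf "p95 = %.2f s; %d%% within 1.0 s\n", $p95, 100 * $ok / @times;
# prints: p95 = 1.60 s; 90% within 1.0 s
```

With this invented sample only 90% finish within 1.0 seconds, so the stipulated goal fails and the bottleneck hunt begins.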

Finally, always remember that what you are seeking to do here is “a thing that has already been done, countless times before.”   Take the time to thoroughly study prior art, and documented methods, before you start writing Perl (or any other) code.   I would predict with some certainty that you can, in fact, model the behavior of this system without writing any Perl code at all!