in reply to Stress testing a webserver
Re^2: Stress testing a webserver
by jmo (Sexton) on Apr 30, 2008 at 06:38 UTC
In the case of ab: it takes the first page (or some random one?), decides that page's size is the correct one, and treats anything that differs as an error page. In my case I'm testing a dynamic page whose size varies a lot, so ab falsely reports tons of errors due to size. Also, I can't specify requests per second, only the number of concurrent requests.

The specific case I'm trying to stress test is a broken Java/Grails application that always returns code 200; the only way to see that it's broken is by the size of the page, so I need a report of the count of each response size (in the LWP::Parallel::UserAgent subclass I made, I bucket sizes into intervals). With httperf I can specify req/s, but I can't see when things start to go bad, the number of pages of each size, or the spread of times it takes to serve a page (which ab can show).

I realize that my need to see the sizes is very specific to the bad code I'm testing (which I lack the power to correct), but the basic question I'm seeking an answer to is very general: "How many requests per second can page x handle before it starts taking too long to reply or starts to break?"

Currently my script outputs this, and I'm pretty satisfied with the data it gathers, except for two problems that are critical for the tests to be of any value to me: LWP::Parallel doesn't seem to honour "max parallel", and I can't specify the number of requests per second.
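The two measurements described above (counts per response size, spread of response times) can be sketched in plain LWP with a simple pacing loop. This is a minimal, hedged sketch, not the poster's actual subclass: the URL, rate, and duration are placeholder values, and because requests here run sequentially, the achieved rate is capped by the server's response time.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use Time::HiRes qw(time sleep);

# Placeholder parameters -- adjust for your target
my $url      = 'http://localhost/page';
my $rate     = 10;    # desired requests per second
my $duration = 30;    # seconds to run
my $interval = 1 / $rate;

my $ua = LWP::UserAgent->new( timeout => 10 );
my ( %by_size, @times );

my $stop = time + $duration;
while ( time < $stop ) {
    my $t0  = time;
    my $res = $ua->get($url);
    my $dt  = time - $t0;
    push @times, $dt;

    # Tally by body length, since the broken app always returns 200
    $by_size{ length $res->content }++;

    # Sleep off the remainder of this request's time slot
    my $left = $interval - $dt;
    sleep $left if $left > 0;
}

printf "%8s  %s\n", 'bytes', 'count';
printf "%8d  %d\n", $_, $by_size{$_} for sort { $a <=> $b } keys %by_size;
@times = sort { $a <=> $b } @times;
printf "median %.3fs  max %.3fs\n", $times[ $#times / 2 ], $times[-1];
```

To actually sustain the target rate under load you would need concurrent workers, which is where the forking suggestion below the fold comes in.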
by perrin (Chancellor) on Apr 30, 2008 at 18:46 UTC
If I were going to roll my own, I'd skip LWP::Parallel and go with forking. There are some HTTP modules with good performance, like HTTP::GHTTP and HTTP::MHTTP. Put those together with Parallel::ForkManager and you have a good start.
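A minimal sketch of the suggested combination, assuming HTTP::GHTTP and Parallel::ForkManager are installed (the URL, request count, and worker count are placeholders, and returning data from children via `$pm->finish` needs Parallel::ForkManager 0.7.6 or later):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;
use HTTP::GHTTP;

my $url      = 'http://localhost/page';   # placeholder target
my $requests = 200;
my $workers  = 20;                        # max concurrent children

my %by_size;
my $pm = Parallel::ForkManager->new($workers);

# Collect each child's result back in the parent
$pm->run_on_finish( sub {
    my ( $pid, $exit, $ident, $signal, $core, $data ) = @_;
    $by_size{$$data}++ if ref $data;
} );

for ( 1 .. $requests ) {
    $pm->start and next;                  # parent continues the loop
    my $r = HTTP::GHTTP->new;
    $r->set_uri($url);
    $r->process_request;
    my $body = $r->get_body;
    my $size = length( defined $body ? $body : '' );
    $pm->finish( 0, \$size );             # ship the size to the parent
}
$pm->wait_all_children;

printf "%8d bytes: %d\n", $_, $by_size{$_}
    for sort { $a <=> $b } keys %by_size;
```

Unlike LWP::Parallel, Parallel::ForkManager reliably enforces the worker limit: `$pm->start` blocks whenever $workers children are already running.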