in reply to Benchmarking Tests

As little as possible and as much as is required.

This really is a "how long is a piece of string" question.

How much your script uses depends first on what it has to do, then on factors like how fast your server is.

Your question is way too loose to get any useful answers.

The questions you should be asking yourself:

Does the server ever become overloaded?

How long does the user have to wait for a page (excluding network time)?

How is this affected by the number of concurrent users?

Is anyone complaining?
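
If you want to put rough numbers on those questions, here is a minimal sketch (the URL and the number of runs are made up for illustration) that times how long a page takes to come back. Run it on the server itself, or against localhost, so network time is more or less out of the picture:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);
    use LWP::UserAgent;

    # Hypothetical values -- point these at the page your script serves.
    my $url  = 'http://localhost/cgi-bin/myscript.pl';
    my $runs = 10;

    my $ua    = LWP::UserAgent->new;
    my $total = 0;

    for my $i (1 .. $runs) {
        my $t0      = [gettimeofday];
        my $res     = $ua->get($url);
        my $elapsed = tv_interval($t0);
        $total += $elapsed;
        printf "run %2d: %.3f seconds (%s)\n", $i, $elapsed, $res->status_line;
    }

    printf "average wall time per page: %.3f seconds\n", $total / $runs;

Running the same loop while several copies are going at once gives you a feel for how the wait grows with concurrent users.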

Re: Re: Benchmarking Tests
by kidd (Curate) on Apr 30, 2003 at 23:18 UTC
    Thanks for your reply.

    The thing is that recently my client's server has been overloaded, and they blamed one of my scripts, which is why I checked all of them.

    We are allowed to use 3% of the total CPU usage, which is something like 3 to 5 CPUs, but the server load we are running at is between 10 and 15.

    What I wanted to do is check whether one of my scripts was causing the server load, or whether it was one of the other hosts, because we are on a shared plan.

    I thought that my CPU usage per script was pretty normal, and we may have up to 20 users at the same time. So I did a little math and thought:

    "If my script is using 0.15 CPU and lets say that 20 users use it at the sames time(it's very unusual), then it would be 3 CPU, that is not close to 10 or to 15".

    I just wanted to be sure that my scripts aren't generating a lot of server load...

      You appear to be mixing up CPU usage and load. In particular, your example of 0.15 CPU * 20 users = 3 load doesn't make sense, as 0.15 is a percentage and load is an absolute count.

      Load is the number of processes that are waiting for time on the CPU. That would be the 10 or 15.

      % CPU is the amount of real time that the process spent actually running, e.g. 0.15.
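
      If you want to look at both numbers on the box itself, here is a small sketch; note that the /proc/loadavg file and the ps options are Linux/procps-specific, so adjust for your platform:

          #!/usr/bin/perl
          use strict;
          use warnings;

          # Linux keeps the 1, 5 and 15 minute load averages in /proc/loadavg.
          open my $fh, '<', '/proc/loadavg' or die "can't read /proc/loadavg: $!";
          my ($one, $five, $fifteen) = split ' ', scalar <$fh>;
          close $fh;
          print "load averages: $one (1 min), $five (5 min), $fifteen (15 min)\n";

          # %CPU per process, as ps reports it (here: all processes named perl).
          print `ps -o pid,pcpu,time,comm -C perl`;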

      So, in your example, if your script was taking 0.15 CPU over 1 second and you had 20 users running it simultaneously, you would be trying to use 20 * 0.15 = 3.00 CPU-seconds, which means you would be using 100% of the CPU for 3 seconds. That would translate into roughly a load of 20 at 0 seconds, 14 at 1 second, 7 at 2 seconds and 0 at 3 seconds, assuming each request ran to completion one after the other, which they probably won't, what with the vagaries of I/O and scheduling.
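
      A little back-of-the-envelope sketch of that arithmetic, assuming one CPU, 0.15 CPU-seconds per request, and requests served strictly one after the other:

          #!/usr/bin/perl
          use strict;
          use warnings;

          my $cpu_per_request = 0.15;   # CPU-seconds each request needs
          my $requests        = 20;     # simultaneous users

          # 20 * 0.15 = 3 CPU-seconds of work in total.
          printf "total CPU time needed: %.2f seconds\n",
              $cpu_per_request * $requests;

          # Flat out, one CPU finishes about 1/0.15 = ~6.7 requests per second,
          # so the queue (and hence the load) drains roughly as the 20 / 14 / 7 / 0
          # figures above, give or take rounding.
          for my $second (0 .. 3) {
              my $remaining = $requests - $second / $cpu_per_request;
              $remaining = 0 if $remaining < 0;
              printf "after %d second(s): ~%.1f requests still queued\n",
                  $second, $remaining;
          }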

      And of course, if you have a multi-processor box, then the real time taken is % CPU / number of CPUs.

      I'm not sure if the % CPU time is the percentage the process uses over a fixed length of time, e.g. 1 second, or if it's the percentage used over the length of time it took to run to completion.

      Anyway, I hope that explains the difference between load and CPU usage (and I hope it's reasonably accurate). I also hope it helps you find the problem. And of course, as other people have already said, do an actual test; *this* is just to help you interpret the numbers a bit better :). Also, load is usually averaged over the last 1, 5 and 15 minutes.
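
      For that actual test, one cheap way to see how much CPU a script really consumes is to ask Perl itself at the end of the run; this is just a sketch of something you could bolt onto one of your scripts:

          #!/usr/bin/perl
          use strict;
          use warnings;

          # ... the script's real work goes here ...

          # times() returns the CPU seconds this process (and its children)
          # have used so far, split into user and system time.
          END {
              my ($user, $system, $cuser, $csystem) = times();
              printf STDERR "CPU used: %.2f user + %.2f sys = %.2f seconds\n",
                  $user, $system, $user + $system;
          }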

      Post the code. You're much more likely to get good answers if you do.