in reply to Re^7: Enhancing Speed
in thread Enhancing Speed

That's my point: slow clients keep an Apache process running until the download finishes.

Slow clients do not increase CPU load, they tie up child processes. I've heard nothing about requests getting queued due to having too few child processes or about excessive context switches due to having too many child processes.

Besides, if it were a really bad idea, why would both the Google and Facebook guys implement it?

We've been explaining that different sites have different bottlenecks. Why do you keep ignoring the specifics that apply to PerlMonks and revert to talking about what would help other sites?

Replies are listed 'Best First'.
Re^9: Enhancing Speed
by ahmad (Hermit) on Aug 18, 2010 at 22:14 UTC
    Slow clients do not increase CPU load, they tie up child processes. I've heard nothing about having too few child processes or about excessive context switches due to having too many child processes.

    Slow clients hold the process, so you'll have to spawn a new process to serve someone else, which DOES increase the CPU/RAM usage, as explained before.

    We've been explaining that different sites have different bottlenecks. Why do you keep ignoring the specifics that apply to PerlMonks and revert to talking about what would help other sites?

    I'm not ignoring the specifics that apply to PerlMonks

    If you take the time to read the link provided by Argel in the post above, you will see tye talking about having to REAP Apache children to free up memory, which is probably caused by slow clients holding up children for a long time, making it necessary to start new processes.

    Faster page delivery means fewer processes have to run together, which will lower CPU load ... that's my point of view.

      I believe ikegami and chromatic are both wrong in their replies to this node. If a significant fraction of request-processing time is spent shipping the bytes to the client, then that part being even a little slower can have a dramatic impact on the number of children Apache will want to create in order to keep up with the requests coming in. Which can certainly run your server out of memory (or even run your server out of other resources by DoS'ing the server with a near-fork-bomb of activity trying to create more children that makes Apache fall further behind which motivates creating more children...).

      I used to work for a high-profile web site that used Apache in a back-end layer. A common request in that layer was to an external service that usually averaged about 100ms per response. A minor bog down at this provider might cause their average response time to climb to about 500ms. This was plenty fast for our needs. But it meant that we needed about 5x as many Apache children for a given level of traffic. And it would send a huge bank of back-end servers from "not busy" to "swamped" with no clear problem (requests aren't unacceptably slow, much less timing out).

      Even after we (at my suggestion due to experience debugging performance problems at PerlMonks) set the number of Apache children to a fixed number (to avoid the fork-bomb-like problem), the lack of children could still cause requests to queue up to the point that the layer in front of that layer could start to fall over.
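      For anyone curious what "a fixed number of children" looks like in practice, it amounts to making the prefork pool's spare-server and limit directives agree, so the pool never grows or shrinks. The values below are purely illustrative, not PerlMonks' (or my former employer's) actual settings:

      ```apache
      # Apache prefork MPM -- illustrative values only.
      # With Start = MinSpare = MaxSpare = MaxClients, the pool size is
      # constant: a burst of slow requests queues in the listen backlog
      # instead of triggering a fork storm of new children.
      StartServers      50
      MinSpareServers   50
      MaxSpareServers   50
      MaxClients        50
      ```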

      So being able to ship the bytes faster can reduce memory requirements on the web hosts and be beneficial. And we certainly have recurring problems at PerlMonks with running out of memory on web hosts.

      But, the time required to ship the bytes to clients is only a significant fraction of the request-processing time during periods when the site is performing unusually fast. So the proposed improvement is unlikely to have much positive impact, I expect.

      But I will keep it in mind the next time I try to tune things. If we haven't recently had problems with lack of web server CPU, then I'll probably try to deploy such a change.

      Update: Re-parented as I originally accidentally replied to the parent of the intended node.

      - tye        

        Thinking back to your latest post about PerlMonks tuning, and particularly the possibility of migrating to a Linux platform, what is the current hardware setup dedicated to the website (web server and database)? And do we have any metrics on bandwidth consumption?

      Slow clients hold the process, so you'll have to spawn a new process to serve someone else, which DOES increase the CPU/RAM usage, as explained before.

      Apache still used preforked servers last I checked. That means its children aren't created in response to requests. Either way, you can definitely set a limit on the number of processes that can exist at any given time with Apache.

      I'm not ignoring the specifics that apply to PerlMonks

      You just indicated that if it's good for Google, it's good for PerlMonks.

      which is probably caused by slow clients holding up children for a long time

      If you have too many children, it's because you set the limits on the number of children too high.

      Faster page delivery means fewer processes have to run together, which will lower CPU load ... that's my point of view.

      Sorry, but that's wrong. CPU load is not related to the number of processes. You could have a thousand processes using no CPU, and one process using all available CPU.

      Faster page delivery means fewer active processes, which could lower CPU load and context switches. As I alluded to in my first post, the question is whether the savings can offset the increase in CPU load and context switching incurred by zipping everything.

      If there really were a problem with long-lived children, I imagine using an appropriate proxy would be a much better solution than zipping. (It might even be worthwhile for the proxy to zip responses.)
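      A common shape for that setup (a sketch with hypothetical port numbers and module paths, not a tested PerlMonks config) is a lightweight buffering front end that slurps each response from the heavy mod_perl back end quickly, optionally gzips it, and spoon-feeds the slow client itself, freeing the back-end child almost immediately:

      ```apache
      # Illustrative front-end reverse proxy: buffers and compresses
      # responses so heavy back-end children are released as soon as
      # they finish generating the page, not when the client finishes
      # downloading it.
      LoadModule proxy_module       modules/mod_proxy.so
      LoadModule proxy_http_module  modules/mod_proxy_http.so
      LoadModule deflate_module     modules/mod_deflate.so

      ProxyPass        / http://127.0.0.1:8080/
      ProxyPassReverse / http://127.0.0.1:8080/
      AddOutputFilterByType DEFLATE text/html text/css application/javascript
      ```

      The point of the design is that the cheap front-end processes are the only ones tied up by slow clients, and they cost a fraction of the memory of a mod_perl child.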

      Reaping Apache httpd children to free memory has nothing to do with slow clients and everything to do with sharable memory pages gradually becoming unshared.
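      The usual knob for that kind of reaping is MaxRequestsPerChild, which retires each child after serving N requests so that memory which has drifted from shared (copy-on-write) to private is returned to the OS when the child exits. The value here is illustrative only:

      ```apache
      # Illustrative: recycle each prefork child after 1000 requests.
      # A freshly forked replacement starts out sharing nearly all of
      # its pages with the parent again, reclaiming the RAM its
      # predecessor had gradually unshared.
      MaxRequestsPerChild 1000
      ```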