in reply to Re^5: Enhancing Speed
in thread Enhancing Speed

FYI, tye covers the FreeBSD issues in a fair amount of detail in 846460.

Update: Wow! I just reread that post in its entirety, and it sounds like using compression could exacerbate the "can't get anything done for X amount of time" problem and/or the need to reap large children.

Elda Taluta; Sarks Sark; Ark Arks

Replies are listed 'Best First'.
Re^7: Enhancing Speed
by ahmad (Hermit) on Aug 18, 2010 at 20:17 UTC

    That's my point: slow clients keep an Apache process running until it finishes downloading.

    Gzipping content will make pages load faster and will save traffic (a good idea even if the server is donated).

    Besides, if it were a really bad idea, why would both the Google and Facebook guys implement it?
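
    For concreteness, the usual way to turn this on in Apache 2.x is mod_deflate; a minimal sketch (the directives are standard, the MIME-type list and BrowserMatch lines are the illustrative ones from the Apache docs):

    ```apache
    <IfModule mod_deflate.c>
        # Compress common text responses; images and archives are already compressed.
        AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml application/javascript
        # Work around ancient browsers with broken gzip handling.
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
    </IfModule>
    ```

    Note the trade-off the rest of this thread is about: the filter spends CPU (and some memory) in every child in order to save bandwidth.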

      Besides, if it were a really bad idea, why would both the Google and Facebook guys implement it?

      It's not a really bad idea in general. It's not a helpful idea for certain sites. You can't blindly apply performance tuning from one site to another and expect it to work. You have to profile what's slow and what's not to find out if you can expect any improvements or if you're only making it worse.

      You haven't profiled anything. Other people have. Listen to them.

      That's my point: slow clients keep an Apache process running until it finishes downloading.

      Slow clients do not increase CPU load, they tie up child processes. I've heard nothing about requests getting queued due to having too few child processes or about excessive context switches due to having too many child processes.
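
      The distinction can be sketched with a toy model (a hypothetical Python stand-in for Apache's child pool, assuming prefork-style one-worker-per-connection):

      ```python
      import time
      from concurrent.futures import ThreadPoolExecutor

      def serve(transfer_time):
          """A worker sits on the connection for the whole transfer.

          It burns essentially no CPU while the slow client drains the
          response, but it cannot serve anyone else in the meantime.
          """
          time.sleep(transfer_time)
          return "done"

      pool = ThreadPoolExecutor(max_workers=2)   # stand-in for a small child pool
      start = time.monotonic()
      futures = [pool.submit(serve, 0.1) for _ in range(4)]
      results = [f.result() for f in futures]
      elapsed = time.monotonic() - start
      # Four slow clients on two workers are served in two waves:
      # elapsed comes out near 0.2 s, even though total CPU time is near zero.
      ```

      The pool is tied up (and new requests would queue), yet the CPU sits idle the whole time.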

      Besides, if it were a really bad idea, why would both the Google and Facebook guys implement it?

      We've been explaining that different sites have different bottlenecks. Why do you keep ignoring the specifics that apply to PerlMonks and revert to talking about what would help other sites?

        Slow clients do not increase CPU load, they tie up child processes. I've heard nothing about having too few child processes or about excessive context switches due to having too many child processes.

        Slow clients hold the process, so you have to open a new process to serve someone else, which DOES increase CPU/RAM usage, as explained before.

        We've been explaining that different sites have different bottlenecks. Why do you keep ignoring the specifics that apply to PerlMonks and revert to talking about what would help other sites?

        I'm not ignoring the specifics that apply to PerlMonks.

        If you take the time to read the link provided by Argel in the post above, you will see tye talking about having to REAP Apache children to free up memory, which is probably caused by slow clients holding up children for a long time, making it necessary to start new processes.

        Faster page delivery means fewer processes have to run at the same time, which will lower CPU load ... that's my point of view.
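
        That last point is, in isolation, just Little's Law (average concurrency = arrival rate × time in system); a quick illustration with made-up numbers:

        ```python
        def concurrent_workers(requests_per_sec, avg_delivery_sec):
            # Little's Law: L = lambda * W
            return requests_per_sec * avg_delivery_sec

        # 50 req/s delivered in 0.2 s each -> ~10 busy workers on average
        fast = concurrent_workers(50, 0.2)
        # Same load, but slow clients stretch delivery to 2 s -> ~100 busy workers
        slow = concurrent_workers(50, 2.0)
        ```

        Whether fewer concurrent workers actually means lower CPU load is exactly what is being disputed above: workers waiting on slow clients cost memory rather than CPU, while compression adds CPU work per request.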

      Isn't it the case that Apache keeps a pool of children available and does not kill off these child processes after each request? If so (my experience is mainly with Apache on Windows), then adding the mod_deflate filter to each child will only bloat the child even more and stress the server further. Over the many years I have been with PerlMonks, I cannot remember anyone being able to point to a lack of bandwidth as a cause of the slowness of the PerlMonks site, and I would even hazard a guess that the number of "slow" clients is very small.

      CountZero

      "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James