in reply to Re: Re: Speed, and my sanity.
in thread Speed, and my sanity.

At that point you have many options.
  1. Buy more RAM.
  2. Reconfigure Apache to have fewer processes.
  3. Identify parts of your system that take up the most memory. Then you can:
    1. Rewrite them to be less memory intensive.
    2. Turn them back to CGI.
    3. Use a dedicated serving process (e.g. FastCGI) for those components.
    4. Use shared memory segments (a sketch follows this list).
  4. Reduce the number of requests that a given child will handle before restarting.
  5. Move to an OS with on-demand paging (per page, rather than swapping whole processes) and give it lots of swap space. (This mainly applies if you are not already using some form of *nix.)
  6. Buy faster hard drives.
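As a rough illustration of option 3.4 (not from the original post), shared memory segments are often paired with Storable; something along these lines, with a made-up key and data, assuming IPC::ShareLite is installed:

    use strict;
    use IPC::ShareLite ();
    use Storable qw(nfreeze thaw);

    # Create (or attach to) a shared memory segment under an agreed-on key.
    my $share = IPC::ShareLite->new(
        -key     => 1971,      # hypothetical key; any integer the processes agree on
        -create  => 'yes',
        -destroy => 'no',
    ) or die "Cannot open shared segment: $!";

    # One process serializes a structure into the segment...
    $share->store( nfreeze( { motd => 'hello', hits => 0 } ) );

    # ...and every Apache child can thaw its own copy when it needs it.
    my $data = thaw( $share->fetch );

The point is only that the memory-heavy structure lives in one place instead of in every child.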
There is no universal solution to poor performance. However, the vast majority of the time mod_perl is a massive improvement over straight CGI. Time and money spent optimizing CGIs to run faster is typically penny wise and pound foolish. Hardware is cheap. With the spare hardware from the dotbombs, it is really cheap. Unless you are doing something ridiculous, the odds are very good that hardware is the best solution at hand.

Re: Re (tilly) 3: Speed, and my sanity.
by mugwumpjism (Hermit) on Aug 27, 2001 at 22:27 UTC
    1. Hey, why didn't I think of that? Nice one.
    2. Mmm, why not; who needs lots of concurrent users anyway? They can wait for the next apache process to finish writing to the network.
      1. Don't need to, I already write my code like that anyway*.
      2. Nice one
      3. Mmm
      4. And do what, use "Storable" to nfreeze and nthaw my data structures to and from the shared memory?
    3. I guess that will stop your server from completely crashing.
    4. How will that stop the per-perl instance data from being swapped out?

    Here's what I see when I start a mod_perl web server. I start it up, it takes up about 3MB; presumably much shared. I load a few CPAN modules and soon enough, the process has grown to 5-8MB. The difference there cannot be shared between processes.

    You could use Apache::Registry with FastCGI to run your perl scripts, as well!

    * - sorry, forgot the <macho>tags :)

         Here's what I see when I start a mod_perl web server. I start it up, it takes up about 3MB; presumably much shared. I load a few CPAN modules and soon enough, the process has grown to 5-8MB. The difference there cannot be shared between processes.

      You should configure Apache/mod_perl to load those CPAN modules upon startup -- then they will be shared.
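      For what it's worth, preloading usually amounts to a small startup.pl pulled in from httpd.conf with a PerlRequire line; the path and module list below are only an example of "a few CPAN modules":

          # startup.pl -- loaded via e.g.: PerlRequire /usr/local/apache/conf/startup.pl
          # Modules use'd here are compiled once in the Apache parent, so their code
          # is shared copy-on-write by every child instead of being recompiled into
          # each child's private memory.
          use strict;
          use Apache::Registry ();
          use CGI ();
          use DBI ();
          1;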

         MeowChow                                   
                     s aamecha.s a..a\u$&owag.print

        OK, I could have done that, but what happens if you have a fair amount of site data that you use to generate the web pages? If the data changes, all of the children need to reload it, and once copy-on-write kicks in the new data cannot be shared automatically. I'd have to implement some form of shared memory system or use a database to hold the data, and then it has to be accessed, or nthaw'ed out of the shared memory, or read from a file and processed on every hit, or you suffer every web server child holding its own copy of the data. And then we have another scaling problem.
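        (For concreteness, the "read from a file" machinery I'd be forced into would look roughly like this sketch, with a made-up path: cache the thawed structure in each child and re-read it only when the file's mtime changes.)

            use strict;
            use Storable qw(retrieve);

            my $data_file = '/var/www/site-data.sto';   # hypothetical path
            my ($site_data, $loaded_mtime);             # cached per Apache child

            # Return the site data, re-reading it only when the file on disk
            # changes, so most hits pay nothing and updates still reach every child.
            sub site_data {
                my $mtime = (stat $data_file)[9];
                if ( !defined $loaded_mtime or $mtime != $loaded_mtime ) {
                    $site_data    = retrieve($data_file);
                    $loaded_mtime = $mtime;
                }
                return $site_data;
            }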

        I like to keep the web server simple; all it should do is accept a request, check it for basic sanity, pass it on to the web site's front door, receive the result (which might be an instruction to send a file on disk), and sit there dishing it out to the user, caching it (if it was generated) so the application server can keep processing. Make the web server more of a reverse proxy than a site management tool. Not only that, it's simple enough to put in a kernel daemon.
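        As a toy sketch of that split (nothing like production code, and the back-end URL is made up), the front end can be as dumb as:

            #!/usr/bin/perl
            use strict;
            use HTTP::Daemon ();
            use LWP::UserAgent ();

            my $backend = 'http://localhost:8080';   # hypothetical application server
            my $daemon  = HTTP::Daemon->new(LocalPort => 8000) or die $!;
            my $ua      = LWP::UserAgent->new;
            my %cache;                               # very naive in-memory cache

            while (my $conn = $daemon->accept) {
                while (my $req = $conn->get_request) {
                    my $path = $req->uri->path;
                    if ($req->method ne 'GET' or $path =~ /\.\./) {   # basic sanity check
                        $conn->send_error(403);
                        next;
                    }
                    # Ask the application server once, then keep dishing out the cached copy.
                    $cache{$path} ||= $ua->get($backend . $path);
                    $conn->send_response($cache{$path});
                }
                $conn->close;
            }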

        I can then put all of the site configuration into an OO Perl domain, out of Apache's completely non-intuitive interface that only a geek can understand, and build easy tools for people to change it.

      First of all, every one of the methods that I described for handling the specific problem you mention has been used in the real world. Besides which, the specific problem you are ranting about is by no means universal. If you run into it, then many solutions exist. Until you run into it, you shouldn't worry about it.

      Secondly, you appear to have been so quick to ridicule that you didn't even try to understand some of the suggestions. For instance, the fifth suggestion I gave was to use an OS that uses demand paging and give it lots of swap space. You ask how this stops data from being swapped out. Obviously it doesn't, but the point is that with normal usage patterns you can handle common requests (following common code paths) without needing to swap anything back in, thereby running at full speed even though you are theoretically well past your RAM limit.

      Finally, your time would have been better spent if you had taken half the energy you spent ranting about how FastCGI is better than mod_perl and instead answered the question at "Wierd issues with STDOUT STDERR and FastCGI". Right now it would appear that a very good reason to use mod_perl rather than FastCGI is that people who run into trouble have better odds of getting help. Case in point: a real question about FastCGI has been sitting there for a day without an answer. Do you think that suaveant is going to be left feeling like FastCGI is the way to go?