in reply to Speed, and my sanity.

As far as speed is concerned, using mod_perl will mostly take care of it. If it's still slow after that, you should do some profiling on it and find the slow part.
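If you want a concrete starting point, Devel::DProf (bundled with perl) is enough for a first pass; the script name below is just a placeholder:

    # profile one run of the script
    perl -d:DProf your_script.pl

    # then summarize the results, slowest subroutines first
    dprofpp -O 15 tmon.out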

For your sanity, you might consider switching over to one of the standard templating tools like Apache::SSI or Template Toolkit. You can read my recent article on perl.com for some background on these. You should also think about breaking up this 400-line monster into modules; it will make maintenance easier.
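To give a rough feel for the Template Toolkit side of that, here is a minimal sketch; the file names and variables are invented, not taken from your code:

    use Template;

    # data you would normally compute in the CGI
    my @items = ('alpha', 'beta');

    my $tt = Template->new({ INCLUDE_PATH => 'templates' })
        or die Template->error;

    # fills templates/page.tt with the variables and prints the result;
    # page.tt might contain:
    #   <h1>[% title %]</h1>
    #   [% FOREACH item = items %]<li>[% item %]</li>[% END %]
    $tt->process('page.tt', { title => 'My Page', items => \@items })
        or die $tt->error;

All the HTML lives in the template, so the Perl that is left is just logic.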

Re: Speed, and my sanity.
by Dylan (Monk) on Aug 27, 2001 at 00:03 UTC
    I think using ePerl would be a good idea.

    Why? Well, look at this: Manifold.txt
    Ugly, and very long. I have a bet with myself that I can make it shorter, with ePerl. :)
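    For the curious, ePerl embeds perl in the HTML between <: ... :> delimiters, so most of the print statements simply disappear. A tiny made-up fragment (@rows is assumed to be computed earlier):

        <ul>
        <: foreach my $row (@rows) {
               print "<li>$row</li>\n";
           } :>
        </ul>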

Re: Re: Speed, and my sanity.
by mugwumpjism (Hermit) on Aug 27, 2001 at 04:00 UTC
    If it's still slow after that, you should do some profiling on it and find the slow part.

    And what exactly do you do when you realise the slow part is the system paging, from having to keep a separate perl interpreter in each and every web server child, most of which are blocked on network I/O most of the time?

      At that point you have many options.
      1. Buy more RAM.
      2. Reconfigure Apache to have fewer processes.
      3. Identify parts of your system that take up the most memory. Then you can:
        1. Rewrite them to be less memory intensive.
        2. Turn them back to CGI.
        3. Use a dedicated serving process (eg FastCGI) for those components.
        4. Use shared memory segments.
      4. Reduce the number of requests that a given child will handle before restarting. (A config sketch for this and option 2 follows the list.)
      5. Move to an OS with on-demand paging (rather than per-process swapping) and give it lots of swap space. (This mainly applies if you are not already using some form of *nix.)
      6. Buy faster hard drives.
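      For options 2 and 4 these are single httpd.conf directives; the numbers below are purely illustrative, not recommendations:

          # httpd.conf (Apache 1.3)
          MaxClients           20     # option 2: fewer concurrent children
          MaxRequestsPerChild  500    # option 4: recycle each child after 500 requests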
      There is no universal solution to poor performance. However, the vast majority of the time mod_perl is a massive improvement over straight CGI. Time and money spent optimizing CGIs to run faster is typically penny-wise and pound-foolish. Hardware is cheap. With the spare hardware from the dotbombs, it is really cheap. Unless you are doing something ridiculous, the odds are very good that hardware is the best solution at hand.
        1. Hey, why didn't I think of that? Nice one.
        2. Mmm, why not; who needs lots of concurrent users anyway? They can wait for the next apache process to finish writing to the network.
          1. Don't need to, I already write my code like that anyway*.
          2. Nice one
          3. Mmm
          4. And do what, use "Storable" to nfreeze and thaw my data structures to and from the shared memory? (Sketched below, after this list.)
        3. I guess that will stop your server from completely crashing.
        4. How will that stop each perl instance's data from being swapped out?
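        For what it's worth, here is roughly what that dance looks like; this sketch assumes IPC::ShareLite, and the key is arbitrary:

            use IPC::ShareLite;
            use Storable qw(nfreeze thaw);

            # attach (or create) a shared memory segment
            my $share = IPC::ShareLite->new(
                -key     => 1971,
                -create  => 'yes',
                -destroy => 'no',
            ) or die "cannot attach shared segment: $!";

            # serialize the structure in, deserialize it back out
            $share->store( nfreeze( { user => 'fred', hits => 42 } ) );
            my $data = thaw( $share->fetch );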

        Here's what I see when I start a mod_perl web server. I start it up and it takes about 3MB, presumably much of it shared. I load a few CPAN modules and soon enough the process has grown to 5-8MB. That difference cannot be shared between processes.
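        (For reference, the usual trick for keeping module code shared is to preload it in the parent before Apache forks, via a startup.pl; the path and module list here are only examples:

            # httpd.conf:  PerlRequire /path/to/startup.pl
            # startup.pl runs once in the parent, so the compiled
            # modules stay copy-on-write shared across the children
            use strict;
            use CGI ();
            use DBI ();
            use Template ();   # whichever heavy modules you actually use
            1;

        Anything loaded per-request in a child, of course, stays private to that child.)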

        You could use Apache::Registry with FastCGI to run your perl scripts, as well!
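        A FastCGI-flavoured script is essentially one persistent loop; this sketch uses CGI::Fast rather than Apache::Registry, and the response body is invented:

            use CGI::Fast;

            # new() blocks until the next request arrives; the process
            # (and its compiled code) persists between requests
            while ( my $q = CGI::Fast->new ) {
                print $q->header('text/plain'),
                      "served by pid $$\n";
            }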

        * - sorry, forgot the <macho> tags :)

      Maybe eliminate the network I/O for your heavy mod_perl processes by putting a proxy in front of them. Perlmonth has a great article on how to set this up properly.
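      The usual shape of that setup is a slim front-end Apache handing requests to the mod_perl back end on another port; the host and port below are invented:

          # front-end httpd.conf (lightweight, no mod_perl)
          ProxyPass        /app/ http://127.0.0.1:8080/app/
          ProxyPassReverse /app/ http://127.0.0.1:8080/app/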

      -Blake

        Right. So get your web server to talk to another web server, because you didn't make your application modular enough to run outside the web server?

        Classic.
