Re: Speed, and my sanity.
by Dylan (Monk) on Aug 27, 2001 at 00:03 UTC
I think using ePerl would be a good idea.
Why? Well, look at this: Manifold.txt
Ugly, and very long. I have a bet with myself that I can make it shorter with ePerl. :)
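(A minimal sketch of the idea, since ePerl's trick is embedding Perl between its default <: ... :> delimiters in otherwise literal text; the table content here is made up:)

    <table>
    <:  # anything printed inside a <: ... :> block lands in the output,
        # so repetitive markup can come from a loop instead of being
        # written out by hand
        for my $row (1 .. 3) {
            print "<tr><td>Row $row</td></tr>\n";
        }
    :>
    </table>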
Re: Re: Speed, and my sanity.
by mugwumpjism (Hermit) on Aug 27, 2001 at 04:00 UTC
|
"If it's still slow after that, you should do some profiling on it and find the slow part."
And what exactly do you do when you realise the slow part is the system paging, from having to keep a separate perl process for each and every web server child that is, most of the time, just blocked on network I/O?
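(For reference, the profiling suggested above would, with Devel::DProf, the stock profiler of the day, look something like this; the script name is made up:)

    perl -d:DProf slow_script.pl   # writes a profile to ./tmon.out
    dprofpp tmon.out               # report subroutines sorted by time spent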
At that point you have many options.
- Buy more RAM.
- Reconfigure Apache to have fewer processes (see the httpd.conf sketch after this list).
- Identify parts of your system that take up the most memory. Then you can:
- Rewrite them to be less memory intensive.
- Turn them back to CGI.
- Use a dedicated serving process (e.g. FastCGI) for those components.
- Use shared memory segments.
- Reduce the number of requests that a given child will handle before restarting (MaxRequestsPerChild; also in the sketch below).
- Move to an OS with on-demand paging (rather than whole-process swapping) and give it lots of temp space. (This mainly applies if you are not already using some form of *nix.)
- Buy faster hard drives.
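(For concreteness, the Apache 1.3 prefork knobs behind two of those options; the numbers are illustrative, not recommendations:)

    # httpd.conf
    MaxClients           30     # fewer concurrent children, less total RAM
    MinSpareServers       5
    MaxSpareServers      10
    MaxRequestsPerChild 500     # recycle each child before it grows too fat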
There is no universal solution to poor performance. However, the vast majority of the time mod_perl is a massive improvement over straight CGI. Time and money spent optimizing CGIs to run faster is typically penny wise and pound foolish. Hardware is cheap; with the spare hardware from the dotbombs, it is really cheap. Unless you are doing something ridiculous, the odds are very good that hardware is the best solution at hand.
- Hey, why didn't I think of that? Nice one.
- Mmm, why not; who needs lots of concurrent users anyway? They can wait for the next apache process to finish writing to the network.
- Don't need to, I already write my code like that anyway*.
- Nice one
- Mmm
- And do what, use "Storable" to nfreeze and thaw my data structures to and from the shared memory? (See the sketch after this list.)
- I guess that will stop your server from completely crashing.
- How will that stop the per-perl instance data from being swapped out?
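(For concreteness, the nfreeze/thaw approach in question, using IPC::ShareLite over a SysV shared memory segment; the key and the data are made up:)

    use strict;
    use IPC::ShareLite;
    use Storable qw(nfreeze thaw);

    # Attach to (or create) a shared memory segment.
    my $share = IPC::ShareLite->new(
        -key     => 1971,
        -create  => 'yes',
        -destroy => 'no',
    ) or die "cannot attach shared segment: $!";

    # Writer (any Apache child): serialize a structure into the segment.
    $share->store( nfreeze( { hits => 42 } ) );

    # Reader (any other child): thaw it back out.
    my $data = thaw( $share->fetch );
    print "hits so far: $data->{hits}\n";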
Here's what I see when I start a mod_perl web server: at startup each process takes about 3MB, presumably much of it shared. I load a few CPAN modules, and soon enough the process has grown to 5-8MB. That difference cannot be shared between processes.
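(Whether that growth is shareable depends on when the loading happens: modules pulled in per-child stay private, but modules preloaded in the parent from a startup.pl sit in copy-on-write pages shared by every child. A sketch, with illustrative module choices:)

    # startup.pl, read once by the Apache parent via
    #   PerlRequire /path/to/startup.pl
    use strict;
    use CGI ();              # load heavy modules here, import nothing
    CGI->compile(':all');    # precompile CGI.pm's autoloaded methods
    use DBI ();
    1;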
You could run your perl scripts Apache::Registry-style under FastCGI, as well!
* - sorry, forgot the <macho>tags :)
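(A minimal sketch of that persistent FastCGI style, using CGI::Fast from CPAN; the response body is made up:)

    #!/usr/bin/perl -w
    use strict;
    use CGI::Fast;

    # One long-lived perl process answers request after request,
    # so startup and compilation costs are paid only once.
    my $served = 0;
    while ( my $q = CGI::Fast->new ) {
        print $q->header('text/plain'),
              "pid $$ has served ", ++$served, " request(s)\n";
    }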
Maybe eliminate the network I/O for your heavy mod_perl procs using a proxy. Perlmonth has a great article on how to set this up properly.
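(The usual shape: a thin front-end Apache absorbs the slow client I/O and serves static files, relaying only the dynamic URLs to the fat mod_perl server on a back-end port. A mod_proxy sketch; host, port, and path are made up:)

    # front-end httpd.conf
    ProxyPass        /perl/ http://127.0.0.1:8080/perl/
    ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/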
-Blake