Re (tilly) 3: Speed, and my sanity.
by tilly (Archbishop) on Aug 27, 2001 at 19:42 UTC
At that point you have many options:
- Buy more RAM.
- Reconfigure Apache to have fewer processes.
- Identify the parts of your system that take up the most memory. Then you can:
  - Rewrite them to be less memory intensive.
  - Turn them back to CGI.
  - Use a dedicated serving process (e.g. FastCGI) for those components.
  - Use shared memory segments.
- Reduce the number of requests that a given child will handle before restarting (a sample httpd.conf sketch for this and for the process count follows this list).
- Move to an OS with on-demand paging (rather than per-process swapping) and give it lots of temp space. (This mainly applies if you are not already using some form of *nix.)
- Buy faster hard drives.
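For concreteness, a minimal httpd.conf sketch of the "fewer processes" and "fewer requests per child" knobs, assuming an Apache 1.3-style prefork setup; the numbers are placeholders, not recommendations:

    # Cap how many mod_perl children run at once, so their combined
    # footprint stays comfortably inside physical RAM.
    MaxClients           30
    StartServers         5
    MinSpareServers      3
    MaxSpareServers      10

    # Recycle each child after it has served this many requests, so any
    # memory it has slowly accumulated is handed back to the OS.
    MaxRequestsPerChild  500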
There is no universal solution to poor performance. However, the vast majority of the time mod_perl is a massive improvement over straight CGI. Time and money spent optimizing CGIs to run faster are typically penny wise and pound foolish. Hardware is cheap; with the spare hardware from the dotbombs, it is really cheap. Unless you are doing something ridiculous, the odds are very good that hardware is the best solution at hand.
- Hey, why didn't I think of that? Nice one.
- Mmm, why not; who needs lots of concurrent users anyway? They can wait for the next apache process to finish writing to the network.
- Don't need to, I already write my code like that anyway*.
- Nice one
- Mmm
- And do what, use Storable to nfreeze and thaw my data structures to and from the shared memory? (A sketch of that approach follows this list.)
- I guess that will stop your server from completely crashing.
- How will that stop the per-perl instance data from being swapped out?
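For what it is worth, the shared-memory suggestion usually does look roughly like that in practice. A minimal sketch, assuming the IPC::ShareLite and Storable modules from CPAN; the key and the data structure are made up for illustration:

    use strict;
    use Storable qw(nfreeze thaw);
    use IPC::ShareLite;

    # Attach to (or create) a shared memory segment identified by a key.
    my $share = IPC::ShareLite->new(
        -key     => 1971,      # arbitrary example key
        -create  => 'yes',
        -destroy => 'no',
    ) or die "Could not create shared memory segment: $!";

    # One Apache child serializes a structure into the segment...
    $share->store( nfreeze( { hits => 42, cache => [ 1, 2, 3 ] } ) );

    # ...and any other child can pull it back out later.
    my $data = thaw( $share->fetch );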
Here's what I see when I start a mod_perl web server. I start it up and it takes up about 3 MB, presumably much of it shared. I load a few CPAN modules and soon enough the process has grown to 5-8 MB. The difference there cannot be shared between processes.
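If you want to see the shared/unshared split for yourself, here is a quick sketch, assuming the GTop module (the libgtop binding the mod_perl guide uses for exactly this) is installed:

    use strict;
    use GTop ();

    # Report the memory of the current process: total size, the part
    # shared with other processes, and the unshared remainder.
    my $mem = GTop->new->proc_mem($$);
    printf "size: %d KB, shared: %d KB, unshared: %d KB\n",
        $mem->size  / 1024,
        $mem->share / 1024,
        ( $mem->size - $mem->share ) / 1024;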
You could use Apache::Registry with FastCGI to run your perl scripts, as well!
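The FastCGI equivalent of a persistent script looks roughly like the sketch below, assuming the CGI::Fast wrapper from CPAN; the interpreter stays resident and loops over requests much as Apache::Registry keeps a compiled script around:

    #!/usr/bin/perl
    use strict;
    use CGI::Fast;

    # Each pass through the loop handles one incoming request; the
    # process (and anything it has compiled or cached) persists between them.
    while ( my $q = CGI::Fast->new ) {
        print $q->header('text/plain');
        print "Served by persistent FastCGI process $$\n";
    }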
* - sorry, forgot the <macho>tags :)
MeowChow
First of all, every one of the methods that I described for handling the specific problem you mention has been used in the real world. Besides which, the specific problem you are ranting about is by no means universal. If you run into it, then many solutions exist. Until you run into it, you shouldn't worry about it.
Secondly, you appear to have been so quick to ridicule that you didn't even try to understand some of the suggestions. For instance, the fifth suggestion I gave was to use an OS that does demand paging and give it lots of temp space. You ask how this stops data from being swapped out. Obviously it doesn't, but the point is that with normal usage patterns you can handle common requests (following common code paths) without needing to swap anything back in, thereby running at full speed even though you are theoretically well past your RAM limit.
Finally, your time would have been better spent if you had taken half the energy you spent ranting about how FastCGI is better than mod_perl and instead answered the question at Wierd issues with STDOUT STDERR and FastCGI. Right now it would appear that a very good reason to use mod_perl rather than FastCGI is that people who run into trouble have better odds of getting help. Case in point: a real question about FastCGI has been sitting there for a day without an answer. Do you think that suaveant is going to be left feeling like FastCGI is the way to go?
Re: Re: Re: Speed, and my sanity.
by blakem (Monsignor) on Aug 27, 2001 at 14:45 UTC
Maybe eliminate the network I/O for your heavy mod_perl procs by using a proxy. Perlmonth has a great article on how to properly set this up.
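A minimal sketch of that setup, assuming a lightweight front-end Apache with mod_proxy and a mod_perl back-end listening on port 8080; the path and port are placeholders:

    # Front-end (lightweight) Apache: serve static files directly and
    # proxy dynamic URLs to the heavy mod_perl back-end. The front end
    # buffers the response, so the big process is freed quickly instead
    # of spoon-feeding slow clients.
    ProxyPass        /app/ http://127.0.0.1:8080/app/
    ProxyPassReverse /app/ http://127.0.0.1:8080/app/

    # Back-end mod_perl Apache (in its own httpd.conf): bind only to
    # localhost so clients never talk to it directly.
    # Listen 127.0.0.1:8080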
-Blake
Keeping an interpreter resident in memory is necessary for real speed with mod_perl, just as it is with PHP or Java servers. If you have enough traffic that you need to decouple the network I/O, using a front-end server works fine, or you could use more experimental options like lingerd. The front-end web server passing requests to a dynamic back-end server is an approach used by many tools, from FastCGI to high-end commercial application servers. The two servers happen to communicate over HTTP in this case, but the concept is nearly universal.
Does it really make a tremendous difference whether your external process is a webserver or a dedicated application server like you recommend?
If you understand how it all works, the correct answer is, "Not really." They are both the same idea. Same problem. Similar kinds of overhead (though the application server can be a little more stripped down).
Given that, is there really a call for rudeness on your part?