Here's what I see when I start a mod_perl web server: at startup it takes about 3MB, presumably much of it shared. I load a few CPAN modules and soon enough the process has grown to 5-8MB. That difference cannot be shared between processes.
You should configure Apache/mod_perl to load those CPAN modules upon startup -- then they will be shared.
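For what it's worth, here is a minimal sketch of that (assuming Apache 1.x with mod_perl; the path and the preloaded modules are just placeholders):

    # httpd.conf -- read a startup script in the parent, before forking
    PerlRequire /usr/local/apache/conf/startup.pl

    # startup.pl -- modules pulled in here are compiled once in the
    # parent, so their code stays copy-on-write shared in every child
    use strict;
    use DBI ();    # placeholders: preload whatever you use at request time
    use CGI ();
    CGI->compile(':all');   # precompile CGI.pm's autoloaded methods too
    1;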
MeowChow
OK, I could have done that, but what happens if you have a fair amount of site data that you use to generate the web pages? If the data changes, all of the children need to reload it, and then the copy-on-write happens: the new data cannot be shared automatically. I'd have to implement some form of shared memory system, or use a database to hold the data. Either way it would have to be accessed and thawed out of shared memory, or read from a file and processed, on every hit, or else you suffer every web server child holding its own copy of the data. And then we have another scaling problem.
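To be fair, the read-from-a-file route is cheaper than it sounds if each child only re-thaws when the data has actually changed. A rough sketch with Storable (the file path and the caching policy are invented for illustration):

    use Storable qw(retrieve);

    my $site_data;
    my $loaded_at = 0;

    sub site_data {
        my $file  = '/var/app/site-data.sto';   # hypothetical path
        my $mtime = (stat $file)[9];
        if (!defined $site_data or $mtime > $loaded_at) {
            $site_data = retrieve($file);       # thaw only when stale
            $loaded_at = $mtime;
        }
        return $site_data;
    }

Every hit still pays a stat(), but the full retrieve() only happens after a change, so most requests serve from the child's cached copy.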
I like to keep the web server simple: all it should do is accept a request, check it for basic sanity, pass it on to the web site's front door, receive the result (which might be an instruction to send a file from disk) and sit there dishing it out to the user, caching it (if it was generated) so the application server can keep processing. Make the web server more of a reverse proxy than a site management tool. Not only that, it's then simple enough to put in a kernel daemon.
I can then put all of the site configuration within an OO Perl domain, out of Apache's completely non-intuitive interface that only a geek can understand, and build easy tools for people to change it.
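Something along these lines, say (a toy sketch; the class name, defaults and accessor are all invented):

    package Site::Config;   # hypothetical OO wrapper for site settings
    use strict;

    sub new {
        my ($class, %args) = @_;
        my $self = { docroot => '/var/www', %args };   # invented defaults
        return bless $self, $class;
    }

    # accessors hide the storage format, so an admin tool can edit
    # the configuration without touching httpd.conf at all
    sub docroot { $_[0]{docroot} }

    1;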
First of all, every one of the methods that I described for handling the specific problem you mention has been used in the real world. Besides, the specific problem you are ranting about is by no means universal. If you run into it, many solutions exist; until you do, you shouldn't worry about it.
Secondly, you appear to have been so quick to ridicule that you didn't even try to understand some of the suggestions. For instance, the fifth suggestion I gave was to use an OS that does demand paging and to give it lots of temp space. You ask how this stops data from being swapped out. Obviously it doesn't, but the point is that with normal usage patterns you can handle common requests (following common code paths) without needing to swap anything back in, thereby running at full speed even though you are theoretically well past your RAM limit.
Finally, your time would have been better spent if you had taken half the energy you spent ranting about how FastCGI is better than mod_perl and instead answered the question at Wierd issues with STDOUT STDERR and FastCGI. Right now it would appear that a very good reason to use mod_perl rather than FastCGI is that people who run into trouble have better odds of getting help. Case in point: a real question about FastCGI has been sitting there for a day without an answer. Do you think that suaveant is going to be left feeling like FastCGI is the way to go?