in reply to (MeowChow) Re6: Speed, and my sanity.
in thread Speed, and my sanity.
OK, I could have done that, but what happens if you have a fair amount of site data that you use to generate the web pages? If the data changes, all of the children need to reload it, and then copy-on-write kicks in: the new data cannot be shared automatically. I'd have to implement some form of shared memory system, or use a database to hold the data. Either way, on every hit it would have to be accessed and thaw'd out from shared memory, or read from a file and processed, or you'd suffer every web server child holding its own copy of the data. And then we have another scaling problem.
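To make the per-hit cost concrete, here's a rough Python sketch of the "read from a file and process it on every hit" option (in the Perl context this would be Storable's freeze/thaw; the names site_data and handle_hit are just illustrative):

```python
import os
import pickle
import tempfile

# Hypothetical "site data" every web server child would need.
site_data = {"pages": {f"/page{i}": f"content {i}" for i in range(1000)}}

# Persist it once, as the parent might do after the data changes.
path = os.path.join(tempfile.mkdtemp(), "site.dat")
with open(path, "wb") as f:
    pickle.dump(site_data, f)

def handle_hit(url):
    # The cost being described: every single hit re-reads and
    # deserializes the whole data set, instead of the children
    # sharing one in-memory copy.
    with open(path, "rb") as f:
        data = pickle.load(f)
    return data["pages"].get(url, "404")

print(handle_hit("/page42"))  # → content 42
```

The deserialize step is repeated for every request, which is exactly the trade-off against letting each child hold (and eventually un-share, via copy-on-write) its own copy.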
I like to keep the web server simple: all it should do is accept a request, check it for basic sanity, pass it on to the web site's front door, receive the result (which might be an instruction to send a file on disk), and sit there dishing it out to the user, whilst caching it (if it was generated) so the application server can keep processing. Make the web server more of a reverse proxy than a site management tool. Not only that, it's then simple enough to put in a kernel daemon.
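A toy Python sketch of that split (the names front_end, app_server and cache are mine, not from the post; real code would speak HTTP over sockets rather than call functions):

```python
# The thin front end: sanity-check, forward, cache, serve.
cache = {}

def app_server(path):
    # Stand-in for the application server / site front door
    # that actually generates the page.
    return f"<html>generated for {path}</html>"

def front_end(path):
    if not path.startswith("/"):   # basic sanity check only
        return "400 Bad Request"
    if path in cache:              # cached copy: the app server
        return cache[path]         # isn't bothered at all
    body = app_server(path)        # pass the request on
    cache[path] = body             # cache the generated result
    return body
```

Everything site-specific lives behind app_server; the front end only validates, forwards, and dishes out cached results, which is what makes it small enough to contemplate as a kernel daemon.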
I can then put all of the site configuration within an OO Perl domain, out of Apache's completely non-intuitive interface that only a geek can understand, and build easy tools for people to change it.