Re: Database driven web content: live or tape?

by vagnerr (Prior)
on Jul 19, 2002 at 19:43 UTC


in reply to Database driven web content: live or tape?

You could potentially have the best of both worlds. perl.com has an article about eToys' website: they used reverse proxying, allowing them to cache pages as they are generated.
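
For the curious, here is a rough sketch of what a caching reverse proxy does, as a single-process, GET-only toy in Perl. The port, back-end address, and in-memory hash cache are all invented for illustration; a real deployment would use something like Squid in front of the app servers.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use HTTP::Daemon;
    use LWP::UserAgent;

    my $backend = 'http://localhost:8080';   # assumed back-end app server
    my %cache;                               # path => [expiry time, response]

    my $proxy = HTTP::Daemon->new(LocalPort => 8000) or die "listen: $!";
    my $ua    = LWP::UserAgent->new;

    while (my $conn = $proxy->accept) {
        while (my $req = $conn->get_request) {
            my $path = $req->uri->path_query;
            my $hit  = $cache{$path};
            if ($hit && $hit->[0] > time) {
                $conn->send_response($hit->[1]);   # still fresh: serve from cache
                next;
            }
            my $res = $ua->get($backend . $path);  # miss: ask the back end
            # honour the back end's freshness hint, if it sent one
            if (my ($ttl) = ($res->header('Cache-Control') || '') =~ /max-age=(\d+)/) {
                $cache{$path} = [time + $ttl, $res];
            }
            $conn->send_response($res);
        }
        $conn->close;
    }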

---If it doesn't fit use a bigger hammer

Replies are listed 'Best First'.
Re: Re: Database driven web content: live or tape?
by Fastolfe (Vicar) on Jul 20, 2002 at 02:16 UTC
    ++

    I would definitely endorse this type of setup for a major project. There's little sense in building your own proprietary caching mechanism when HTTP already has one built into it.

    Create your application such that it builds dynamic pages and uses HTTP headers to indicate how long a resource should be cached (if it should be at all). Then funnel all of your inbound traffic through a fast caching HTTP proxy server.
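
    For example, a minimal CGI sketch of the header side of this (the five-minute lifetime is an illustrative choice, not a recommendation):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use CGI;

        my $q = CGI->new;

        # tell any cache in front of us this page may be reused for 5 minutes
        print $q->header(
            -type          => 'text/html',
            -Cache_Control => 'public, max-age=300',
            -expires       => '+5m',
        );
        print $q->start_html('Cached page'),
              $q->p('Generated at ' . scalar localtime),
              $q->end_html;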

    I have also seen approaches where the document root of the web server is actually an on-disk cache. A missing file triggers a 404 handler, which invokes the CGI or back-end process (perhaps a "real" web server behind another firewall) to generate the page, and possibly caches the result in the document root for future requests.
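
    A minimal sketch of that 404-as-cache-miss trick, assuming Apache is configured with "ErrorDocument 404 /cgi-bin/fill-cache.cgi" (the paths and the page-generation step are stand-ins):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use File::Basename qw(dirname);
        use File::Path qw(mkpath);

        my $docroot = '/var/www/html';            # assumed on-disk cache root
        my $path    = $ENV{REDIRECT_URL} || '/';  # the URL that missed on disk

        exit 1 if $path =~ /\.\./;   # refuse anything that could escape the docroot

        # stand-in for the real back-end call that builds the page
        my $html = "<html><body>Generated for $path at "
                 . scalar(localtime) . "</body></html>\n";

        # cache it under the docroot so the next request is a plain static hit
        my $file = "$docroot$path";
        mkpath(dirname($file));
        open my $fh, '>', $file or die "can't cache $file: $!";
        print $fh $html;
        close $fh;

        # and answer this request directly (200, not 404)
        print "Status: 200 OK\nContent-Type: text/html\n\n$html";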

Re: Re: Database driven web content: live or tape?
by PhiRatE (Monk) on Jul 20, 2002 at 14:38 UTC
    I did a similar thing for a previous employer. I wrote the original (and, to my knowledge, only) reverse failover/balance patch for Squid 2.3. We used that as the reverse-proxy engine and carefully constructed our Expires headers for the various kinds of content. That made an enormous difference to our capabilities: we had a load-distribution algorithm that used each system according to its capabilities without any manual ratio setting, utterly transparent failover, in-memory caching of everything using Squid's very effective algorithms, a setup that was language- and web-server-agnostic (we were using PHP, Perl and C in various areas), and a nice centralised place to pick up the logs to boot.
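
    For illustration, per-content Expires headers might be generated like this (the content classes and lifetimes here are invented, not the actual values we used):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use HTTP::Date qw(time2str);

        # how long each class of page may live in the reverse cache (seconds)
        my %lifetime = (
            front_page => 60,       # regenerated every minute
            article    => 3600,     # articles change rarely
            image      => 86400,    # static assets live a day
        );

        sub cache_headers {
            my ($kind) = @_;
            my $ttl = $lifetime{$kind} or return "Cache-Control: no-cache\n";
            return "Expires: " . time2str(time + $ttl) . "\n"
                 . "Cache-Control: public, max-age=$ttl\n";
        }

        print "Content-Type: text/html\n", cache_headers('article'), "\n";
        print "<html>...</html>\n";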

    It was a good day when I just shut down one of the web servers without warning and not a single user connection was lost.
