In short, there is nothing faster than RAM, so you're best off writing a small C HTTP server that loads all HTML pages into RAM and serves them from there.
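As a rough illustration of the idea (in Perl rather than C, using HTTP::Daemon; the port and document root are assumptions), a server that preloads every page into a hash at startup and answers requests straight from memory could look like this:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use HTTP::Daemon;
    use HTTP::Response;
    use HTTP::Status qw(RC_NOT_FOUND);
    use File::Find;

    my $docroot = '/var/www/html';    # hypothetical document root
    my %pages;                        # URI path => page content, held in RAM

    # Slurp every HTML file below the document root into memory once at startup
    find(sub {
        return unless -f $_ && /\.html\z/;
        (my $uri = $File::Find::name) =~ s/^\Q$docroot\E//;
        open my $fh, '<', $_ or die "Cannot read $File::Find::name: $!";
        local $/;
        $pages{$uri} = <$fh>;
    }, $docroot);

    my $d = HTTP::Daemon->new(LocalPort => 8080) or die "Cannot listen: $!";
    print "Serving ", scalar(keys %pages), " pages at ", $d->url, "\n";

    while (my $client = $d->accept) {
        while (my $request = $client->get_request) {
            my $path = $request->uri->path;
            if ($request->method eq 'GET' && exists $pages{$path}) {
                # Answer from the in-memory hash, never touching the disk
                $client->send_response(
                    HTTP::Response->new(200, 'OK',
                        ['Content-Type' => 'text/html'], $pages{$path})
                );
            } else {
                $client->send_error(RC_NOT_FOUND);
            }
        }
        $client->close;
    }

A production version would of course need signal handling, logging and probably forking, but the core of "serve from a hash" stays this small.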
If you don't want to write a small HTTP server yourself, you can use Apache and serve the files from a ramdisk.
If that is still not possible, perhaps because not enough RAM is available on your machine (which is unlikely, as even the x86 architecture can easily address 2GB of RAM for storage), you can leave file-level caching to the OS and simply serve plain files.
Only at this point does MySQL possibly get a foot in the door, as even MySQL has to do exactly the same things the OS has to do to serve pages. Dynamically creating a page will almost always be slower than piping the data from RAM to the network card, and slower than piping it from disk as well.
If you think you need to recreate data more dynamically than a nightly cron job allows, you can consider Apache with an ErrorDocument directive to generate "missing" (that is, uncached) pages on demand, and weed out "old" pages with find or File::Find every hour.
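For the weeding half, a minimal cron-driven sketch using File::Find might look like the following; the cache directory, the .html extension and the one-hour cutoff are all assumptions you would adapt to your setup:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Find;

    my $cache_root = '/var/www/cache';   # hypothetical cache directory

    # Delete cached HTML files whose modification time is older than one hour;
    # -M reports the file age in days, so one hour is 1/24.
    find(sub { unlink $_ if -f $_ && /\.html\z/ && -M _ > 1/24 }, $cache_root);

The next request for a deleted page then falls through to the ErrorDocument handler, which regenerates and re-caches it.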
A fully dynamic, database-driven solution will most likely be the slowest option, as it has the drawback of needing to go through both the DB and the filesystem for every page served.
Of course, until we know the exact usage patterns and possibly the page sequences, all of this is moot. You need to benchmark all the solutions to see whether your actual access patterns favour one of them over another.
Personally, I like serving static HTML, as it carries the fewest security risks, and backups, failover and bringing a new version of the site online are all easily done with the standard shell toolset. Site updates can be made atomic by accessing the document root via a symlink, so a site update simply means moving the symlink.
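A sketch of that atomic switch (the paths are hypothetical; the same can be done from the shell with ln and mv):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical layout: each release lives in its own directory, and
    # Apache's DocumentRoot points at the 'htdocs' symlink.
    my $new_release = '/var/www/releases/2010-01-15';
    my $docroot     = '/var/www/htdocs';

    # Create the new symlink under a temporary name, then rename() it over
    # the old one; rename() is atomic, so requests always see a complete site.
    symlink $new_release, "$docroot.new" or die "symlink failed: $!";
    rename  "$docroot.new", $docroot     or die "rename failed: $!";

Rolling back is just another symlink move to the previous release directory.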