Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

I have a frame window which displays an HTML page based on the contents of a binary data file. There are a number of these data files, so I use an index frame to choose which file's HTML version appears in the output frame. Suppose I have five different HTML pages I can display in the output frame: is it possible to use CGI to generate the five HTML pages and send them to the client along with the parent frameset? Then, if I decide to view the fifth file a second time, the browser would display the locally stored copy. At the moment each index entry is the URL of a CGI script, so every time a page is selected a new version is downloaded from the server (it isn't a static HTML file; the CGI prints the page instead). I don't consider this at all efficient.
It might also be an idea to keep a log file on the server recording when each data file was last updated. If the locally stored copy is current, there's no need to download a new version.

I'm not at all sure about any of this, so if anyone could shed some light on it I would be grateful.
THANX Baz

Replies are listed 'Best First'.
Re: displaying html pages on the fly, efficiently.
by tachyon (Chancellor) on Sep 01, 2001 at 21:45 UTC

    HTTP is the Hypertext Transfer Protocol. This is the protocol that defines how a client (browser) and a web server deal with requests for information. It is a request-response protocol: the client makes requests and the server responds. In a nutshell, each request and each response has two parts - a header and content. The header is mandatory, the content optional. The server's response header includes details about the document, among them an "ETag", which is designed to let the client detect whether the document has changed. Think of the ETag as a checksum for the requested file. When the client already has a copy of the document cached locally, it can send a conditional request quoting that ETag (an If-None-Match header). If the ETag still matches, the server answers with a short header only (304 Not Modified) and the browser displays its local copy. If it does not match, the server sends a full response with both header and content, and the browser displays that.
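    The conditional-request dance above can be honoured from the server side of a CGI script too. A minimal sketch, assuming the content checksum is an acceptable ETag (real servers may also fold in mtime or size); `build_page()` is a hypothetical stand-in for whatever generates your HTML. The web server exposes the client's If-None-Match header to CGI scripts as the `HTTP_IF_NONE_MATCH` environment variable:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Hypothetical helper: fingerprint the generated document.  Any stable
# checksum works as an ETag; HTTP requires it to be quoted.
sub make_etag {
    my ($content) = @_;
    return '"' . md5_hex($content) . '"';
}

# Stand-in for whatever your CGI currently prints from the data file.
sub build_page { return "<html><body>demo</body></html>\n" }

my $content = build_page();
my $etag    = make_etag($content);

# If the client sent If-None-Match and it still matches, the document
# is unchanged: answer with a bare 304 header and no body.
if ( defined $ENV{HTTP_IF_NONE_MATCH}
    && $ENV{HTTP_IF_NONE_MATCH} eq $etag )
{
    print "Status: 304 Not Modified\r\nETag: $etag\r\n\r\n";
}
else {
    print "Content-Type: text/html\r\nETag: $etag\r\n\r\n";
    print $content;
}
```

    The saving is only the document body - the script still runs on every request - but for large pages that is most of the bandwidth.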

    OK, here are the spanners. You can specify in the HTTP header how long a document is to remain current, to prevent browsers caching docs too long. The browsers may or may not respect this! Proxy servers cache documents between you and the actual web server, so your request may never actually get as far as you think! Browsers may not cache docs that end in .cgi, as these are assumed to be dynamic. Then again, they may. Different versions of different browsers do different things!

    In your case, using a Perl script to generate static HTML pages from your data makes the most sense. Unless the content needs to change dynamically (in response to each request), you do not need CGI at all.
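    A minimal sketch of that approach: run a script offline (or from cron) that converts each data file into a static .html file, then point the index frame at those files so the web server - and the browser cache - handle everything. The `data/*.dat` layout and `render_html()` are assumptions standing in for your real files and parsing logic:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for whatever parsing the CGI currently does
# to turn a binary data file into HTML.
sub render_html {
    my ($raw) = @_;
    return "<html><body><pre>" . length($raw)
         . " bytes</pre></body></html>\n";
}

# Bake each data file into a sibling .html file, once, offline.
for my $data_file ( glob "data/*.dat" ) {
    ( my $html_file = $data_file ) =~ s/\.dat$/.html/;
    open my $in, '<', $data_file or die "read $data_file: $!";
    local $/;                      # slurp mode: read whole file
    my $raw = <$in>;
    open my $out, '>', $html_file or die "write $html_file: $!";
    print {$out} render_html($raw);
}
```

    Rerun it only when a data file changes, and the server serves plain static pages the rest of the time.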

    For more info use Super Search for text like "browser cache expires header CGI.pm...."

    cheers

    tachyon

    s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

      Tachyon,

      That second-to-last paragraph is an option most devs forget about. Ian Kallen has a great piece on Industrial Strength Publishing; when he talks about "baking and frying components," that's pretty close to your recommendation. It's an option I've found very useful.

      -derby