HTTP is the HyperText Transfer Protocol. This is the protocol that defines how a client (browser) and a web server deal with requests for information. It is a request/response protocol: the client makes requests and the server responds. In a nutshell each request and each response has two parts - headers and content (the body). Headers are mandatory, content is optional. The first time a client requests a page the server responds with headers and content; among the headers is an "ETag", which is designed to let the client detect whether the document has changed. Think of the ETag as a checksum for the requested file. On later requests the client sends that ETag back to the server (in an If-None-Match header) to ask whether the copy it has stored locally is still up to date. If it is, the server replies "304 Not Modified" with headers only and the browser displays the local copy. If it is not, the server sends a full response with both headers and content, and the browser displays that.
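The exchange looks roughly like this (headers trimmed for brevity, ETag value made up for illustration):

```
# First request: server sends the full document plus an ETag
GET /page.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
ETag: "abc123"
Content-Type: text/html

...content...

# Later request: client quotes the ETag back in If-None-Match
GET /page.html HTTP/1.1
Host: www.example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
```

The 304 response has no body at all - that is the whole point, the browser is told its cached copy is fine without the content being resent.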
OK here are the spanners. You can specify how long a document is to remain current (via the Expires and Cache-Control headers) to prevent browsers caching docs too long. The browsers may or may not respect this! Proxy servers cache documents between you and the actual web server, so your request may never actually get as far as you think! Browsers may not cache docs whose URLs end in .cgi, as these are assumed to be dynamic. Then again they may. Different versions of different browsers do different things!
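The headers in question look something like this (the date and one-hour lifetime are just illustrative - CGI.pm will generate the Expires line for you if you call header() with something like -expires => '+1h'):

```
Expires: Thu, 01 Dec 2002 16:00:00 GMT
Cache-Control: max-age=3600
```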
In your case using a Perl script to generate static HTML web pages from your data makes the most sense. Unless the content needs to change dynamically (in response to each request) you do not need CGI at all.
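A minimal sketch of that approach - the file name and data are made up for illustration, substitute your real data source:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical data - in practice pull this from your database/file/etc.
my @items = ( 'first thing', 'second thing' );

# Write a plain static page. The web server then serves and caches
# index.html like any other static file - no CGI involved at all.
open my $fh, '>', 'index.html' or die "Can't write index.html: $!";
print {$fh} "<html><body><ul>\n";
print {$fh} "<li>$_</li>\n" for @items;
print {$fh} "</ul></body></html>\n";
close $fh or die "Close failed: $!";
```

Rerun the script (by hand or from cron) whenever the data changes and the page on disk is regenerated; browsers and proxies can cache it as aggressively as they like.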
For more info use Super Search for text like "browser cache expires header CGI.pm...."
cheers
tachyon
s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print
|