
Someone else has probably thought of this before me, but I'm posting this since I dreamt it up and produced working code in about 5 minutes. I was yet again amazed at Perl and CPAN!

  #!/usr/bin/perl -w
  use strict;
  use CGI;
  use CGI::Carp qw(fatalsToBrowser);
  use Archive::Tar;

  my $cgi  = CGI->new();
  my $tar  = Archive::Tar->new('foobar.tar.gz');   # read the whole archive
  my $file = $cgi->path_info();                    # e.g. "/foo/bar.html"
  $file =~ s|^/||;                                 # strip the leading slash
  print $cgi->header(), $tar->get_content($file);  # serve the member file

Say you have called this script foobar.cgi and have a tarball called foobar.tar.gz - then requesting http://servername/foobar.cgi/foo/bar.html will serve the file foo/bar.html straight out of the archive. In fact, if your webserver is properly configured, you could name the script just, say, documents, and the URL would look like http://servername/documents/foo/bar.html, giving your visitors not the least hint that anything unusual is happening, besides the rather long load time.
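For what it's worth, "properly configured" might amount to a single ScriptAlias, assuming Apache with mod_cgi (the filesystem path here is made up):

  ScriptAlias /documents /usr/lib/cgi-bin/foobar.cgi

With that, a request for /documents/foo/bar.html runs the script with /foo/bar.html as its PATH_INFO.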

There are some quick&dirty bits here - see the sketch after the list for one way to address all three:

  1. I didn't feel like handling MIME types correctly, but if you wanted to use this for real you'd probably want to read the server's mime.types.
  2. I didn't give any extra thought to rewriting the path_info(); I just unconditionally slice any preceding slash off of it. This may or may not produce unintended results.
  3. You'll probably want to trap failure to find $file and try $file."index.html" and/or $file."/index.html" in that case. Substitute index.html for as many variations as you like.
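Here's a rough, untested sketch of how those three points might look folded into the script. The /etc/mime.types location and the 404 handling are my assumptions, not part of the original five-minute version:

  #!/usr/bin/perl -w
  use strict;
  use CGI;
  use CGI::Carp qw(fatalsToBrowser);
  use Archive::Tar;

  # (1) Build an extension-to-type map from the server's mime.types;
  # the /etc/mime.types path is an assumption and varies per system.
  my %mime;
  if (open my $mt, '<', '/etc/mime.types') {
      while (<$mt>) {
          next if /^\s*(?:#|$)/;            # skip comments and blank lines
          my ($type, @exts) = split ' ';
          $mime{lc $_} = $type for @exts;
      }
      close $mt;
  }

  my $cgi = CGI->new();
  my $tar = Archive::Tar->new('foobar.tar.gz');

  # (2) Slightly more careful path rewriting: drop leading slashes and
  # crudely refuse "..", rather than blindly trusting path_info().
  my $file = $cgi->path_info();
  $file =~ s|^/+||;
  $file = '' if $file =~ /\.\./;

  # (3) Fall back to index.html variants when the exact name is missing.
  for my $try ($file, $file . 'index.html', $file . '/index.html') {
      next unless $try ne '' && $tar->contains_file($try);
      my ($ext) = $try =~ /\.([^.\/]+)$/;
      my $type  = $mime{lc($ext || '')} || 'application/octet-stream';
      print $cgi->header(-type => $type), $tar->get_content($try);
      exit;
  }
  print $cgi->header(-status => '404 Not Found'), "Not found.\n";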

Unfortunately, it easily spikes your CPU load to 100% and eats lots of memory for long stretches if an HTML page refers to images stored within the tarball, since that causes multiple concurrent CGI processes to each ungzip/untar the archive. I tried Archive::Zip for comparison, but unfortunately didn't get any better results.
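One idea I haven't tried, so take it as a sketch rather than working code: cache each extracted member on disk the first time it's requested, so concurrent hits don't all re-gunzip the whole tarball. The cache directory is made up, and this version is naive about races between concurrent writers:

  use File::Path qw(make_path);
  use File::Basename qw(dirname);

  # Serve from a disk cache, extracting from the tarball only on a miss.
  # Assumes $file has already been sanitized as above.
  my $cache = "/tmp/foobar-cache/$file";
  unless (-f $cache) {
      make_path(dirname($cache));
      my $tar = Archive::Tar->new('foobar.tar.gz');
      open my $out, '>', $cache or die "can't write cache: $!";
      binmode $out;
      print $out $tar->get_content($file);
      close $out;
  }
  open my $in, '<', $cache or die "can't read cache: $!";
  binmode $in;
  print $cgi->header();
  print while <$in>;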

So there. Utterly useless for any practical purposes but just dead cool. :-) What do you think?