Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

I'm trying to include several large (1 to 2 MB) data files in a script I'm using. The text files will act as a base template.
Would it make sense to write an independent module that simply returns the data file, and if so, is it possible to compress the data file inside the module and uncompress it before using it?

Am I going down the correct path on this, or is there a better solution?
Thanks

Re: File Compression???
by marto (Cardinal) on Jun 28, 2005 at 15:54 UTC
    Hi,

    The module Archive::Any may be worth looking at.
    However, I am not sure that this is something you want to be doing.
    Depending on what your script does and how often you need to read the template files, this could be an inefficient approach. Could you describe what your script does (or, better yet, post the code), how often it runs (is it a script you intend to run at set intervals?), and so on.
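
    For illustration, here is a minimal Archive::Any sketch; the archive name templates.tar.gz and the target directory are placeholders:

    use strict;
    use warnings;
    use Archive::Any;

    # Open the archive (the format is detected automatically) and unpack it.
    my $archive = Archive::Any->new('templates.tar.gz')
        or die "Could not read archive\n";
    my @files = $archive->files;            # list the contents
    $archive->extract('/tmp/templates');    # extract() takes an optional target dir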

    Hope this helps,

    Martin
Re: File Compression???
by tlm (Prior) on Jun 28, 2005 at 16:43 UTC

    I'm not sure of the wisdom of doing this, but, just for laffs... Two files letssee.pl and Mongo.pm:


    ### letssee.pl
    use strict;
    use warnings;

    use Mongo;

    my $buffer = Mongo::payload();
    print $buffer;

    __END__

    ### Mongo.pm
    use strict;
    use warnings;

    package Mongo;

    use PerlIO::gzip;

    1; # appease require

    sub payload {
        # Push the :gzip layer onto DATA so reads inflate on the fly.
        binmode( DATA, ':gzip' );
        local $/ = \4096;                # read in 4K chunks
        my $ret;
        $ret .= $_ while <DATA>;
        return $ret;
    }

    __DATA__

    Now, append compressed data to Mongo.pm, so that it can be accessed through its DATA handle.

    % echo 'hello world'| gzip -c - >> Mongo.pm
    Lastly, run letssee.pl:
    % perl letssee.pl
    hello world
    The Mongo::payload routine reads the compressed payload from the DATA handle, uncompressing it on the fly. See PerlIO and PerlIO::gzip for more details on PerlIO layers.
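
    The same layer works on any file handle, not just DATA. A minimal sketch (the file name template.gz is hypothetical):

    use strict;
    use warnings;
    use PerlIO::gzip;

    # Open a gzipped file through the :gzip layer; reads return inflated text.
    open my $fh, '<:gzip', 'template.gz' or die "open: $!";
    print while <$fh>;
    close $fh;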

      That works great! I ended up using Tie::Gzip instead on Windows.
      Thanks again!
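
      For reference, a minimal Tie::Gzip sketch, assuming the tie-then-open interface shown in that module's synopsis; data.txt.gz is a placeholder name:

      use strict;
      use warnings;
      use Tie::Gzip;

      # Tie the handle first, then open the gzipped file through it;
      # reads come back uncompressed.
      tie *GZIP, 'Tie::Gzip';
      open GZIP, '< data.txt.gz' or die "open: $!";
      print while <GZIP>;
      close GZIP;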