I'm not sure how large the cost difference is (CPU decompression vs. file I/O), but there is also Compress::LZF, which claims to be almost as fast as a simple memcpy. Maybe it provides enough compression to outweigh the disk I/O.
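For what it's worth, here's a minimal sketch of the round trip with Compress::LZF (the file name and data are made up for illustration; this is not benchmarked):

    use strict;
    use warnings;
    use Compress::LZF qw(compress decompress);

    # Hypothetical record; in practice this would be a row or blob
    # you're about to write to disk.
    my $record = "some large chunk of row data ..." x 1000;

    # Compress before writing, so less data hits the disk.
    my $squeezed = compress($record);
    open my $out, '>:raw', 'record.lzf' or die "write: $!";
    print {$out} $squeezed;
    close $out;

    # Read it back and decompress.
    open my $in, '<:raw', 'record.lzf' or die "read: $!";
    my $from_disk = do { local $/; <$in> };
    close $in;

    my $original = decompress($from_disk);
    print "round trip ok\n" if $original eq $record;

Whether the CPU spent in compress()/decompress() is cheaper than the I/O you save depends entirely on your data and disks, so it's worth measuring on a representative sample first.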
In reply to Re^4: Strategy for managing a very large database with Perl
by Corion
in thread Strategy for managing a very large database with Perl
by punkish