in reply to Re^3: Strategy for managing a very large database with Perl (Video)
in thread Strategy for managing a very large database with Perl

I'm not sure how large the cost differences are (CPU decompression vs. file I/O), but there is also Compress::LZF, which claims to be almost as fast as a simple memcpy - maybe it provides enough compression to outweigh the disk I/O.
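Something like this could give a rough feel for the trade-off (the record count and contents here are invented for illustration; compress and decompress are Compress::LZF's documented interface):

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);
    use Compress::LZF;    # exports compress() and decompress()

    # Synthetic data: 100_000 records of 28 bytes each -- sizes and
    # contents are made up for this sketch.
    my $data       = join '', map { pack 'N7', ($_) x 7 } 1 .. 100_000;
    my $compressed = compress( $data );

    printf "compressed to %.1f%% of original\n",
        100 * length( $compressed ) / length( $data );

    # Compare raw compression and decompression throughput.
    cmpthese( -3, {
        compress   => sub { my $c = compress( $data ) },
        decompress => sub { my $d = decompress( $compressed ) },
    } );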


Re^5: Strategy for managing a very large database with Perl (Video)
by BrowserUk (Patriarch) on Jun 18, 2010 at 16:22 UTC

    The problem with compression is that it screws up random access. I probably shouldn't even have mentioned it.

    Uncompressed, you can read the 28 bytes associated with any given "pixel" in the twinkling of an eye. If you have to read and decompress--even in memory--the entire file to get at each pixel, it isn't going to help performance. And the OP says he is unconcerned with space.
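    For contrast, a minimal sketch of the uncompressed fixed-width case (the file name and pixel index are assumptions): any record is one seek and one read, no matter how big the file is.

        use strict;
        use warnings;

        my $file        = 'pixels.dat';    # hypothetical data file
        my $pixel_index = 123_456;         # hypothetical record number
        my $rec_size    = 28;              # 28 bytes per "pixel"

        open my $fh, '<:raw', $file or die "open $file: $!";
        seek $fh, $pixel_index * $rec_size, 0 or die "seek: $!";
        read( $fh, my $record, $rec_size ) == $rec_size
            or die "short read";
        # $record now holds the 28 bytes for that pixel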


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.