in reply to Re: gzseek for perl filehandles
in thread gzseek for perl filehandles

That's what gzip's
      --rsyncable   Make rsync-friendly archive
option should solve. That said, I'm open to using a compressor other than gzip if it offers random access. I think .zip must support this, for example, but I don't see a compressed-filehandle library with random seek support on CPAN.
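
For what it's worth, IO::Uncompress::Gunzip (from the IO-Compress distribution on CPAN) does give you a seek method on a compressed handle, but only an emulated one: it can only seek forward, and it gets there by decompressing and discarding everything in between. A minimal sketch (the file name is made up):

    use strict;
    use warnings;
    use Fcntl qw(SEEK_SET);
    use IO::Uncompress::Gunzip qw($GunzipError);

    # Open the compressed file; 'corpus.txt.gz' is just a placeholder name.
    my $z = IO::Uncompress::Gunzip->new('corpus.txt.gz')
        or die "gunzip failed: $GunzipError\n";

    # "Seek" to an uncompressed offset. This is emulated: everything before
    # the target is inflated and thrown away, and backward seeks are a
    # fatal error.
    $z->seek(500_000_000, SEEK_SET);

    my $line = $z->getline();    # read a line from ~500MB into the data
    print $line;

    $z->close();

So it works, but for a 1GB file every deep seek pays the full decompression cost up to that point.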

Re^3: gzseek for perl filehandles
by Anonymous Monk on Dec 23, 2010 at 15:58 UTC
    File size is 1GB compressed; it's not words but phrases that I want to match against.
Re^3: gzseek for perl filehandles
by BrowserUk (Patriarch) on Dec 23, 2010 at 21:02 UTC

    Googling site:zlib.net rsyncable turns up 0 hits?

        The 1999 paper certainly made it seem like something well worth pursuing, at least for rsync purposes.

        But I think the Anonymonk (OP?) who brought the subject up has the wrong end of the stick.

        From my reading, resetting the dictionary when the rolling checksum rolls over allows rsync to be more efficient by breaking the compressed file into smaller, separately decompressible chunks.

        But unless the mechanism also stored the (uncompressed) offset of each of those chunks, then it wouldn't help gzseek be more efficient. It would still need to decompress every chunk serially to work out the start offset of each chunk. I guess if it stored the offsets on the first run through, it would save a bit of time on subsequent seeks. But even with a full table of chunk offsets, random access would still be horribly slow.
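
        To make the offset-table idea concrete, here's a rough sketch. Note this is not what --rsyncable actually produces; it assumes the data has instead been recompressed as a concatenation of independent gzip members (say one per ~10MB of input), with a table of [compressed start, uncompressed start] pairs recorded either at compression time or on one full serial pass. A seek then only has to inflate from the start of the relevant member:

        use strict;
        use warnings;
        use Fcntl qw(SEEK_SET);
        use IO::Uncompress::Gunzip qw($GunzipError);

        # Hypothetical index: one entry per gzip member,
        # [ compressed byte offset, uncompressed byte offset ].
        my @index = (
            [ 0,         0          ],
            [ 3_100_000, 10_000_000 ],
            [ 6_250_000, 20_000_000 ],
            # ... built once, on a single serial pass
        );

        sub read_at {
            my ($file, $want, $len) = @_;    # $want = uncompressed offset

            # Last member starting at or before the wanted offset.
            my ($entry) = grep { $_->[1] <= $want } reverse @index;

            open my $fh, '<:raw', $file or die "open: $!";
            seek $fh, $entry->[0], SEEK_SET or die "seek: $!";

            # Each member is a complete gzip stream, so decompression can
            # start here rather than at the top of the file.
            my $z = IO::Uncompress::Gunzip->new($fh, MultiStream => 0)
                or die "gunzip failed: $GunzipError\n";

            # Discard only the data between the member start and the target
            # (in practice you would skip in smaller blocks).
            my $skip;
            $z->read($skip, $want - $entry->[1]) if $want > $entry->[1];

            my $buf;
            $z->read($buf, $len);
            return $buf;
        }

        # e.g. grab 8KB starting 500MB into the uncompressed data
        print read_at('corpus.chunked.gz', 500_000_000, 8192);

        Even then, a seek still costs inflating up to a whole member, so the member size is a trade-off between the size of the index and the cost of each seek.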


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.