The 1999 paper certainly made it seem like something well worth pursuing. At least for rsync purposes.
But I think the Anonymonk (OP?) that brought the subject up has the wrong end of the stick.
From my reading, resetting the dictionary when the rolling checksum rolls over allows rsync to be more efficient by breaking the compressed file into smaller, separately decompressible chunks.
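For what it's worth, here's a toy sketch of that trigger-and-reset in Python (the window size, trigger mask, and names are my own guesses, not figures from the paper). A rolling sum runs over the last WINDOW bytes of input, and whenever it hits the trigger the compressor does a Z_FULL_FLUSH, which resets the deflate state so the next chunk back-references nothing before the boundary:

```python
import zlib
from collections import deque

WINDOW = 4096            # rolling-sum window size -- my guess, not the paper's figure
TRIGGER_MASK = 4095      # fire when the low 12 bits of the sum are all zero

def rsyncable_compress(data: bytes) -> bytes:
    """Deflate `data`, doing a full flush whenever a rolling sum of the
    last WINDOW bytes hits the trigger. Z_FULL_FLUSH resets the
    compressor's state, so no chunk back-references data before its own
    boundary -- that is what makes the chunks separately decompressible."""
    co = zlib.compressobj()
    out = []
    window = deque(maxlen=WINDOW)
    rolling = 0
    chunk_start = 0
    for i, byte in enumerate(data):
        if len(window) == WINDOW:
            rolling -= window[0]          # oldest byte falls out of the window
        window.append(byte)               # deque discards window[0] itself
        rolling += byte
        # Content-defined boundary: the same bytes produce the same boundary
        # even if material was inserted earlier in the file -- the rsync win.
        if rolling & TRIGGER_MASK == 0 and i + 1 - chunk_start >= WINDOW:
            out.append(co.compress(data[chunk_start:i + 1]))
            out.append(co.flush(zlib.Z_FULL_FLUSH))
            chunk_start = i + 1
    out.append(co.compress(data[chunk_start:]))
    out.append(co.flush())                # Z_FINISH terminates the stream
    return b"".join(out)
```

A plain zlib.decompress() still reads the whole stream straight through; the flush points are invisible to it, which is why this alone buys gzseek nothing.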
But unless the mechanism also stored the (uncompressed) offset of each of those chunks, it wouldn't help gzseek be more efficient. It would still need to decompress every chunk serially to work out the start offset of each one. I guess if it stored the offsets on the first run through, it would save a bit of time on subsequent seeks. But even with a full table of chunk offsets, random access would still be horribly slow.
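To make that last part concrete, the table would look something like this (the class and names are my own invention, not anything from the paper; zlib's examples/zran.c implements the real version of this idea for arbitrary gzip streams):

```python
import bisect

class ChunkIndex:
    """Hypothetical offset table: one entry per chunk boundary, mapping
    uncompressed offset -> compressed offset. Building it still costs a
    full serial decompression; it only pays off on later seeks."""

    def __init__(self):
        self.uncomp = []   # uncompressed offsets of boundaries, ascending
        self.comp = []     # compressed offset of each boundary

    def add(self, uncomp_off, comp_off):
        self.uncomp.append(uncomp_off)
        self.comp.append(comp_off)

    def locate(self, target):
        """Last boundary at or before `target` (uncompressed). The caller
        must still inflate serially from that boundary to the byte it
        actually wants -- far more work than a plain lseek()."""
        i = bisect.bisect_right(self.uncomp, target) - 1
        if i < 0:
            return (0, 0)              # before the first boundary
        return (self.uncomp[i], self.comp[i])
```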
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.