Rolling checksums are great for detecting subsections that may be identical, but you then need a full 128-bit-or-better digest to check that they actually are. And even that is not a 100% guarantee: two different blocks can, however improbably, share the same digest.
Your first decision will be how big to make your rolling block size. Bigger blocks save more space per match, but very quickly reduce the odds of your finding common sections. Smaller blocks increase your odds of hits, but also increase your odds of false positives, and with them the number of full digests you need to calculate in order to weed those false positives out.
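To make that concrete, here's a rough sketch in Python of the weak-hit/strong-confirm loop with the block size as the tunable. All the names, the 64-byte block and the choice of MD5 as the confirming digest are my own illustration, not anything prescribed; the only point is the two-step weak/strong structure:

    import hashlib

    M = 65521      # modulus, as in Adler-32
    BLOCK = 64     # the block-size trade-off discussed above

    def weak(data):
        # rsync-style weak checksum of a whole block
        a = sum(data) % M
        b = sum((len(data) - i) * c for i, c in enumerate(data)) % M
        return a, b

    def roll(a, b, out_byte, in_byte, size):
        # slide the window one byte: drop out_byte, take in_byte
        a = (a - out_byte + in_byte) % M
        b = (b - size * out_byte + a) % M
        return a, b

    def find_common(old, new, size=BLOCK):
        # index every size-aligned block of `old` by its weak checksum
        index = {}
        for off in range(0, len(old) - size + 1, size):
            index.setdefault(weak(old[off:off + size]), []).append(off)
        matches = []
        if len(new) < size:
            return matches
        a, b = weak(new[:size])
        i = 0
        while True:
            for off in index.get((a, b), ()):            # weak hit...
                if (hashlib.md5(old[off:off + size]).digest()
                        == hashlib.md5(new[i:i + size]).digest()):
                    matches.append((i, off))             # ...confirmed
                    break   # a real implementation would skip ahead a block here
            if i + size >= len(new):
                break
            a, b = roll(a, b, new[i], new[i + size], size)
            i += 1
        return matches

Every one-byte slide is cheap, but every weak hit costs you a full digest over both blocks; shrink BLOCK and both the hits and the false hits multiply.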
And then you have the problem of how you store the result: 'a-bit-that-is-different', 'a-bit-that-is-the-same-as-some-chunk-of-some-other-email', 'a-bit-that-is-different', 'a-bit-that-is-the-same-as-...', and so on.
Where do you store the bits that are common to 2 or more emails? And how do you reference them from the places you removed them?
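One possible answer, purely for illustration (every name below is invented), is a shared content-addressed chunk store keyed by the strong digest, with each email reduced to a list of parts that are either literal bytes or references into that store:

    import hashlib

    store = {}    # digest -> common chunk, stored exactly once

    def ref(chunk):
        # park a common chunk in the shared store; keep only its digest
        d = hashlib.md5(chunk).digest()    # the 128-bit reference
        store[d] = chunk
        return ('ref', d)

    def lit(data):
        # a bit that is different: kept inline, verbatim
        return ('lit', data)

    def reassemble(parts):
        # rebuild the original email from literals and store lookups
        return b''.join(p if tag == 'lit' else store[p] for tag, p in parts)

    email = [lit(b'a-bit-that-is-different'),
             ref(b'a-chunk-shared-with-some-other-email'),
             lit(b'another-bit-that-is-different')]
    assert reassemble(email) == (b'a-bit-that-is-different'
                                 b'a-chunk-shared-with-some-other-email'
                                 b'another-bit-that-is-different')

Note what that buys you: every read of every email now goes through the store, and deleting an email means reference-counting every chunk it touches.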
If your reference mechanism is (say) the 128-bit digest of the common section, each reference costs 16 bytes, which means the size of your rolling checksum block will need to be at least twice that for you to achieve any space saving at all; and probably 4 times to be of merit.
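The arithmetic behind that claim, assuming a 16-byte (128-bit) digest per reference, two emails sharing each chunk, and ignoring all indexing overhead:

    DIGEST = 16    # bytes in a 128-bit reference

    def saving(block, sharers=2):
        raw = sharers * block                  # chunk stored in every email
        deduped = block + sharers * DIGEST     # chunk once + one ref each
        return raw - deduped

    for block in (16, 32, 48, 64, 128):
        print(block, saving(block))    # -16, 0, 16, 32, 96

Break-even lands exactly at twice the digest size, and even at four times the digest size you only recover a quarter of what those duplicated chunks cost raw.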
And remember, although the rsync algorithm is described as O(N), that is for comparing one pair of files or documents. If you are to do a full cross-compare of all your emails, one against another, you are looking at an O(N²) process; being smart and comparing each pair only once still leaves you N*(N-1)/2 comparisons, which is O(N²) all the same.
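To put a (hypothetical) number on that:

    from math import comb

    N = 100_000          # emails in the archive; made-up figure
    print(comb(N, 2))    # 4,999,950,000 pairwise rsync runs

Even at a millisecond per pair, that is nearly two months of solid CPU time.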
In the end, you'd almost certainly get better compression, and save gobs of time and CPU, by using gzip or similar.
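For comparison, the whole-archive gzip route is a single O(N) pass (the filename below is hypothetical):

    import gzip, shutil

    with open('mailbox.mbox', 'rb') as src, \
         gzip.open('mailbox.mbox.gz', 'wb', compresslevel=9) as dst:
        shutil.copyfileobj(src, dst)

gzip's DEFLATE window is only 32KB, so it won't catch duplication between distant emails; but it picks up the dense local redundancy (headers, quoted replies) for free, with none of the bookkeeping above.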
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use every day'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.