in reply to Re^2: Comparing images
in thread Comparing images
This is a statistical tactic, so the file format is not a factor; assume binary files in which all 256 byte values are possible.
If you pick a random offset within two files being compared, and compare the byte values at that offset, then the probability that they have the same value is:
1 / 256 = 0.00390625 or ~0.4%
And if you pick two random offsets and compare the bytes drawn from both files at those offsets, then the probability that both pairs match (if the files are unrelated) is the above value squared:
0.00390625^2 = 0.0000152587890625 or 0.0015%
Now, seeking to a random offset is far more expensive than reading one extra byte once you are there. So, what is the probability that a word (2 sequential bytes), read at the same (random) offset from each of two files, is the same?
1 / 65536 = 0.0000152587890625 or ~0.0015%
I.e. the same as the two-random-offsets case above. And by extension, selecting two words at two offsets gives a probability of:
1 / 2^32 ≈ 0.0000000002328 or 0.000000023%
Again, if the 2 words are read as a single dword at a single offset, then the odds remain the same.
In other words, the odds of two non-identical files containing the same 32-bit value at the same offset are statistically vanishingly small.
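For the record, the arithmetic above checks out exactly. A quick sketch in Python, using exact rational arithmetic (the variable names are mine, just for illustration):

```python
from fractions import Fraction

# Probability that one byte drawn from each of two unrelated files matches.
p_byte = Fraction(1, 256)
print(float(p_byte))        # 0.00390625

# Two independent byte probes, or equivalently one 16-bit word probe.
p_word = p_byte ** 2
print(float(p_word))        # 1.52587890625e-05

# Two word probes, or equivalently one 32-bit dword probe.
p_dword = p_word ** 2
print(float(p_dword))       # 2.3283064365386963e-10
```

Note that two byte probes, one word probe, two word probes, and one dword probe all collapse to powers of the same 1/256, which is why the single-seek dword read gives the same odds as four separate byte reads.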
So, when comparing files of equal size, there is still no need to read the whole of each file and run a checksum or hashing algorithm over it just yet. By storing a random offset/32-bit value pairing, along with a file's size and checksum/hash, the occasions on which it is necessary to actually compute the checksum/hash are reduced almost to nil.
The choice of random offset deserves some thought.
Many filetypes have headers which contain control information that is either fixed, or common to most files of that type, and therefore non-diagnostic.
There are many other similarly non-diagnostic fields which should be avoided. A simple strategy for avoiding these (in most cases) is to use an offset derived from the file size (which also reduces the volume of data that needs to be accumulated/stored). E.g. reading the 32-bit value stored at the halfway point (suitably rounded down to the nearest 4-byte boundary) will avoid most headers in most file formats.
Although this simple (and fast) tactic is not guaranteed to distinguish non-identical files, the final safeguards are still the calculation of a full checksum or hash over the entire file, or even a full byte-wise comparison, so the risk is negligible. But the tactic serves to eliminate those final, relatively expensive strategies in all but a minuscule number of cases.