Yes. A bit like a checksum, but at 128 bits it's reasonably safe to assume that if the generated numbers are the same, the data from which they were generated is also the same.
Note: "Reasonably safe" means "not guaranteed", but for your application it's perfect, as you only need to manually compare those files that generate the same signature. If they are indeed the same, you can discard one of them.
(Incidentally, if you ever find two substantially different files that generate the same MD5, it would be interesting to see them. :)
The problem with this, as I mentioned, is that even inconsequential differences, like trailing whitespace, will give you different MD5s. Hence the suggestion to strip the whitespace before generating them.
If the data contains numbers, you might want to "normalise" those to some consistent format (using sprintf for example). Likewise, if there is any chance that text may sometimes be identical except for case, you could normalise that to all lower or upper.
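A minimal sketch of the normalisation idea, assuming the simple rules above (per-line trailing whitespace, case folding, and fixed-precision numbers). The choice of six decimal places is arbitrary and just for illustration:

```perl
use strict;
use warnings;

# Normalise text before hashing so inconsequential differences
# don't produce different MD5s.
sub normalise {
    my ($text) = @_;
    $text =~ s/[ \t]+$//mg;                       # strip trailing whitespace per line
    $text = lc $text;                             # fold case
    $text =~ s/(\d+\.\d+)/sprintf '%.6f', $1/ge;  # consistent number format (assumed precision)
    return $text;
}

print normalise("Foo 1.5  \n");  # prints "foo 1.500000\n"
```

You would run each file's contents through something like this before taking the MD5, so that two files differing only in whitespace, case, or number formatting hash identically.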
In the end, you get 5000 (big) numbers. Stick them in a hash, checking for their previous existence first. Any duplicates and you have found what you're looking for.
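The whole pass might look something like this sketch, using Digest::MD5 from the core distribution. The file list comes from @ARGV and the normalisation step is just the trailing-whitespace strip as a placeholder; adapt both to your data:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

my %seen;    # digest => first file seen with that digest
for my $file (@ARGV) {
    open my $fh, '<', $file or die "Can't open $file: $!";
    my $data = do { local $/; <$fh> };    # slurp whole file
    close $fh;

    $data =~ s/[ \t]+$//mg;               # placeholder normalisation step

    my $digest = md5_hex($data);
    if ( exists $seen{$digest} ) {
        print "$file may duplicate $seen{$digest}\n";
    }
    else {
        $seen{$digest} = $file;
    }
}
```

Any pair of files reported by this loop is then a candidate for the manual comparison mentioned above.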
In reply to Re^3: Verifying data in large number of textfiles by BrowserUk
in thread Verifying data in large number of textfiles by dchandler