in reply to Re: Verifying data in large number of textfiles
in thread Verifying data in large number of textfiles

is an md5 digest like a fingerprint for each file? that too might be useful...

Re^3: Verifying data in large number of textfiles
by BrowserUk (Patriarch) on Aug 18, 2004 at 03:04 UTC

    Yes. A bit like a checksum, but being 128 bits, it's reasonably safe to assume that if the generated numbers are the same, the data from which they were generated is also the same.

    Note: Reasonably safe means "not guaranteed", but for your application it's perfect, as you only need to manually compare those files that generate the same signature. If they are indeed the same, then you can discard one of them.
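    A minimal sketch of generating a per-file digest with the standard Digest::MD5 module (the sub name is my own; the path is whatever you pass in):

    ```perl
    use strict;
    use warnings;
    use Digest::MD5;

    # Return the 32-character hex MD5 digest of a file's contents.
    sub file_md5 {
        my ($path) = @_;
        open my $fh, '<', $path or die "Cannot open $path: $!";
        binmode $fh;
        return Digest::MD5->new->addfile($fh)->hexdigest;
    }
    ```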

    (Incidently, you ever find two substantially different files that generate the same md5, it would be interesting to see them. :)

    The problem with this, as I mentioned, is that even inconsequential differences, like trailing whitespace, will get you different md5s. Hence the suggestion to strip the whitespace before generating the md5s.
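    One way to sketch that: digest line by line, trimming trailing whitespace as you go, so padding differences don't change the result. (The exact trimming rule here is my assumption; tighten or loosen it for your data.)

    ```perl
    use strict;
    use warnings;
    use Digest::MD5;

    # Digest a file with trailing whitespace stripped from every line,
    # so inconsequential padding doesn't produce a different MD5.
    sub stripped_md5 {
        my ($path) = @_;
        my $md5 = Digest::MD5->new;
        open my $fh, '<', $path or die "Cannot open $path: $!";
        while ( my $line = <$fh> ) {
            $line =~ s/\s+$//;          # drop trailing whitespace (and newline)
            $md5->add( $line, "\n" );   # re-add a uniform line ending
        }
        return $md5->hexdigest;
    }
    ```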

    If the data contains numbers, you might want to "normalise" those to some consistent format (using sprintf for example). Likewise, if there is any chance that text may sometimes be identical except for case, you could normalise that to all lower- or uppercase.
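    As an illustration of that normalisation idea (the '%.6g' format, the number-matching regex, and the choice of lowercase are all arbitrary assumptions; pick whatever suits your data):

    ```perl
    use strict;
    use warnings;

    # Normalise a line before digesting: numbers reformatted through
    # sprintf so "1.50" and "1.5" match, and everything lowercased
    # so "Foo" and "foo" match.
    sub normalise_line {
        my ($line) = @_;
        $line =~ s/(\d+(?:\.\d+)?)/sprintf '%.6g', $1/ge;
        return lc $line;
    }
    ```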

    In the end, you get 5000 (big) numbers. Stick them in a hash, checking for their previous existence first. Any duplicates and you have found what you're looking for.
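    That last step might be sketched as a hash keyed on the digest, collecting the paths that share each one (the sub name is hypothetical; this version digests raw contents, but you could swap in any of the normalisations above):

    ```perl
    use strict;
    use warnings;
    use Digest::MD5;

    # Given a list of files, return array refs grouping the paths
    # whose contents produced the same MD5 digest: the candidate duplicates.
    sub duplicate_groups {
        my (@files) = @_;
        my %seen;    # digest => [ paths... ]
        for my $path (@files) {
            open my $fh, '<', $path or die "Cannot open $path: $!";
            binmode $fh;
            my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
            push @{ $seen{$digest} }, $path;
        }
        return grep { @$_ > 1 } values %seen;
    }
    ```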


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "Think for yourself!" - Abigail
    "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon