in reply to Re: Recursive search for duplicate files
in thread Recursive search for duplicate files

If used naively, that doesn't work out well for large files, because each one has to be read from disk in its entirety.

If you care about performance, you might want to hash only the first 5% of each file (or the first 1k, or whatever) and check for collisions; only if two partial hashes collide do you still need to look at the entire files.
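
A minimal sketch of that idea in Perl (assuming Digest::MD5 is available; the prefix length, the variable names, and taking the candidate paths from @ARGV are all placeholders, not part of the original suggestion):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    my @files      = @ARGV;    # candidate paths (here taken from the command line)
    my $prefix_len = 4096;     # arbitrary; "the first 1k or whatever"
    my %by_prefix;

    # Pass 1: hash only the first $prefix_len bytes of each file.
    for my $file (@files) {
        open my $fh, '<:raw', $file or next;
        my $buf = '';
        read $fh, $buf, $prefix_len;
        close $fh;
        push @{ $by_prefix{ md5_hex($buf) } }, $file;
    }

    # Pass 2: only files whose prefix digests collide get a full-file hash.
    for my $group (grep { @$_ > 1 } values %by_prefix) {
        my %by_full;
        for my $file (@$group) {
            open my $fh, '<:raw', $file or next;
            my $md5 = Digest::MD5->new;
            $md5->addfile($fh);
            close $fh;
            push @{ $by_full{ $md5->hexdigest } }, $file;
        }
        print "duplicates: @$_\n" for grep { @$_ > 1 } values %by_full;
    }

With this two-pass scheme, large files that differ early on are never read past the first few kilobytes.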


Re^3: Recursive search for duplicate files
by sh1tn (Priest) on Nov 27, 2007 at 14:04 UTC
    I agree. Another measure, with performance in mind, is to compare file sizes before doing anything else.
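
    A rough sketch of that size-first filter (a hedged sketch only; @files and the use of @ARGV are placeholders for however the script collects its paths):

        use strict;
        use warnings;

        # Group candidate paths by size first; only a group containing two or
        # more files of the same size can possibly hold duplicates, so only
        # those groups ever need to be hashed or read at all.
        my @files = @ARGV;
        my %by_size;
        push @{ $by_size{ -s $_ } }, $_ for grep { -f $_ } @files;

        my @candidates = grep { @$_ > 1 } values %by_size;
        print "same size (", -s $_->[0], " bytes): @$_\n" for @candidates;

    Combined with the partial-hash idea above, most files drop out after a single stat call and never get read at all.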