in reply to Removing Duplicate Files

Not a one-liner.

You need to partition all files by size. Then, for each size: if there are exactly two files, compare their contents and possibly delete one. If there are more than two, partition the files of that size by their fingerprint (CRC, MD5, SHA-1, whatever), and within each partition do a compare-and-possibly-delete-one between all pairs (each partition possibly shrinking as you go, so some candidate comparisons get cancelled before they are ever performed).
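Roughly, in Python (an untested sketch of my own, not anybody's library; find_duplicates and the 1 MB chunk size are made up, and for brevity each partition is only compared against its first member rather than between all pairs):

    import os, hashlib, filecmp
    from collections import defaultdict

    def find_duplicates(paths):
        """Yield (keep, duplicate) pairs; the caller decides whether to delete."""
        by_size = defaultdict(list)
        for p in paths:
            by_size[os.path.getsize(p)].append(p)

        for size, group in by_size.items():
            if len(group) < 2:
                continue                      # unique size: nothing to compare
            if len(group) == 2:
                candidates = [group]          # exactly two: skip the fingerprint
            else:
                by_hash = defaultdict(list)   # partition further by fingerprint
                for p in group:
                    h = hashlib.sha1()
                    with open(p, 'rb') as f:
                        for chunk in iter(lambda: f.read(1 << 20), b''):
                            h.update(chunk)
                    by_hash[h.hexdigest()].append(p)
                candidates = [g for g in by_hash.values() if len(g) > 1]

            for g in candidates:
                keep = g[0]
                for other in g[1:]:
                    # full content compare before anything is deleted
                    if filecmp.cmp(keep, other, shallow=False):
                        yield keep, other

Yielding pairs instead of deleting keeps the dangerous part (the actual unlink) in the caller's hands.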

This, first of all, is safe: no file is mistakenly deleted, because a deletion always follows a full content compare. We don't trust the fingerprint function; we only use it as an indication that two files are suspiciously alike. Second, it is reasonably efficient: no comparisons are made that are bound to fail.
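The full content compare itself can be done block by block, so neither file is ever read into memory whole. Something like (again just a sketch; same_contents and the block size are my own names, not part of any standard tool):

    def same_contents(path_a, path_b, block_size=1 << 20):
        """Byte-for-byte comparison of two files, one block at a time."""
        with open(path_a, 'rb') as a, open(path_b, 'rb') as b:
            while True:
                block_a = a.read(block_size)
                block_b = b.read(block_size)
                if block_a != block_b:
                    return False              # first differing block: not duplicates
                if not block_a:
                    return True               # both exhausted at the same point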

If this were a little smaller, we *might* have been able to get away with reading a block of each file simultaneously. But as it is we'll run out of file handles first thing :)