In reply to: List Duplicate Files in a given directory

These are all great Perl solutions, but you can also just do something like

    md5sum * | sort

and let your eye pick out the pairs of MD5 sums that are identical -- sorting puts matching sums on adjacent lines.

As already noted, if you have a slow machine and/or gigantic files, the MD5 step may take a while. My very simple test looked at about a dozen text files (tab-delimited tables) averaging 10K each in size; hashing all of them took 24 msec. On a set of larger files (90M in total), it still took only 1.7 seconds. Sometimes simple is best -- it depends on your situation.
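And if eyeballing the output gets old, the same idea is only a few lines of Perl. This is just a minimal sketch, assuming the files of interest are in the current directory: it hashes each regular file with Digest::MD5, groups files by digest, and prints any digest shared by more than one file.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Digest::MD5;

    # Group files in the current directory by their MD5 digest.
    my %by_digest;
    for my $file (glob '*') {
        next unless -f $file;    # skip directories, etc.
        open my $fh, '<:raw', $file
            or do { warn "Can't open $file: $!"; next };
        my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
        push @{ $by_digest{$digest} }, $file;
    }

    # Any digest with two or more files is a set of duplicates.
    for my $digest (sort keys %by_digest) {
        my @files = @{ $by_digest{$digest} };
        print "$digest: @files\n" if @files > 1;
    }

You could compare file sizes first and only hash files whose sizes match, which avoids reading unique files at all -- but for a handful of small files it hardly matters.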

Alex / talexb / Toronto

Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.