in reply to Identical Files to Symbolic Links

while(@fnames) {
    my $f = shift @fnames;
    for my $f2 (@fnames) {
        last if($size{$f} != $size{$f2});

That's O(n²)! You'd get a significant boost by making a set of lists of files with the same size:

my %filesets;
for ( @fnames ) {
    push @{ $filesets{ -s $_ } }, $_;
}
# filter out any lists that have only one element:
for ( keys %filesets ) {
    @{ $filesets{$_} } <= 1 and delete $filesets{$_};
}

You could use checksums to get each list down to a set of "highly likely" candidate duplicates:

my %filesets;
for ( @fnames ) {
    my $size = -s $_;
    my $csum = `sum "$_"`;
    # use a separator so that, e.g., (size 12, sum "345...") can't
    # collide with (size 123, sum "45..."):
    push @{ $filesets{"$size|$csum"} }, $_;
}

But you'd probably still want to do actual file comparisons (e.g. with `cmp`) to confirm that the candidates really are duplicates.
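A minimal sketch of that confirmation step, assuming the %filesets built above (it uses the core File::Compare module, whose compare() returns 0 when two files have identical contents, instead of shelling out to cmp):

use File::Compare;

# within each cluster of candidates, confirm duplicates by comparing contents
for my $key ( keys %filesets ) {
    my @reps;    # one representative file per distinct content seen so far
    FILE: for my $file ( @{ $filesets{$key} } ) {
        for my $rep ( @reps ) {
            if ( compare( $rep, $file ) == 0 ) {    # 0 means identical contents
                print "$file is a duplicate of $rep\n";
                # this is where you'd unlink $file and symlink it to $rep
                next FILE;
            }
        }
        push @reps, $file;    # contents not seen before in this cluster
    }
}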

We're building the house of the future together.

Re^2: Identical Files to Symbolic Links
by Aristotle (Chancellor) on Nov 10, 2005 at 02:57 UTC

    sum has to read the entire file anyway, so there’s no gain from checksumming them to decide whether you want to compare them.

    The right method to do this would be to put all the files in one set to start out with, read them byte-for-byte, and whenever files disagree, split the set into one set for each byte value encountered. Whenever a set consists of just one file, you can drop it. When you get to the end of any of the files and still have sets with more than one file in them, each of the sets is a group of identical files.

    Of course, this is unworkable when you have more files than available handles. But in that case, all solutions I can think of (I wrote up and deleted three so far) are ugly as sin. Checksumming will usually get you out of the bind, but in edge cases with a huge number of identical files, that approach is really painful. Hmm.
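    A rough sketch of that set-splitting idea (my own code, not necessarily what Aristotle had in mind: it reads 8 KB blocks rather than single bytes, takes the file names from @ARGV, assumes every file can be open at once, and simply dies on read errors):

    use strict;
    use warnings;

    my @fnames = @ARGV;    # the files to check, as in the parent node

    # start with a single set holding every file, paired with a raw-mode handle
    my @sets = ( [ map {
        open my $fh, '<:raw', $_ or die "Can't open $_: $!";
        [ $_, $fh ];
    } @fnames ] );

    my @identical;    # each element: an array ref of names of identical files

    while ( @sets ) {
        my @next;
        for my $set ( @sets ) {
            # read the next block from every file in the set and
            # partition the set by the data that came back
            my %by_chunk;
            for my $member ( @$set ) {
                my ( $name, $fh ) = @$member;
                defined( read $fh, my $chunk, 8192 )
                    or die "Read error on $name: $!";
                push @{ $by_chunk{$chunk} }, $member;
            }
            for my $group ( values %by_chunk ) {
                next if @$group == 1;               # unique so far: drop it
                if ( grep { !eof( $_->[1] ) } @$group ) {
                    push @next, $group;             # still agreeing: keep reading
                }
                else {
                    # every member is at EOF and read the same data all along
                    push @identical, [ map { $_->[0] } @$group ];
                }
            }
        }
        @sets = @next;
    }

    print join( ' ', @$_ ), "\n" for @identical;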

    Makeshifts last the longest.

      Oh, but there is! It's much better to read each file once (O(n)) rather than compare all the pairs of files (O(n²)).

      We're building the house of the future together.

        Who talked about comparing all the pairs individually? I outlined a solution that would require reading all files exactly once and only once (instead of at least twice, as with any checksum approach).

        Makeshifts last the longest.

      sum has to read the entire file anyway, so there’s no gain from checksumming them to decide whether you want to compare them.

      Indeed, in my own duplicate-searching script (which currently only deletes duplicates, though I plan to make it more flexible one day) I cluster files by size, since that is much cheaper, and then calculate checksums within each cluster to decide whether the files are identical. This is not 100% certain, as is well known, but it is enough for me. If I ever decide to make it into a serious thing, I'd add an option for full comparison...

      Funny: it seems just about everybody has rolled his or her own version of this thing...