...

    my %image;
    opendir(DIR, $dir) or die("Couldn't open dir $dir: $!");
    foreach my $file (readdir(DIR)) {
        my $size = -s "$dir/$file";    # simpler size code
        if (exists($image{$size})) {
            handle_duplicate($image{$size}, $file);
        } else {
            $image{$size} = $file;
        }
    }

If you want to do something more sophisticated, you may find it easier to build a list of potential duplicates and then postprocess each list in a second loop. The following code builds a hash of arrays, each array containing the filenames that have the same size. It then calls a checking function on each list containing more than one filename:

    ...
    foreach my $file (readdir(DIR)) {
        my $size = -s "$dir/$file";
        push @{$image{$size}}, $file;
    }
    closedir(DIR);

    foreach my $list (values %image) {
        handle_duplicates(@{$list}) if (@{$list} > 1);
    }
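The checking function itself is left undefined above. A minimal sketch of one way to write it, assuming you want to confirm duplicates by content rather than trust the size match alone (the function name, its return convention, and the use of the core Digest::MD5 module are my assumptions, not part of the original post):

```perl
use strict;
use warnings;
use Digest::MD5;

# Hypothetical checking function for the loop above: given a list of
# same-size files, hash each one and group files by digest. Returns a
# list of array refs, one per group of files with identical content.
sub handle_duplicates {
    my @files = @_;
    my %by_digest;
    foreach my $file (@files) {
        open(my $fh, '<', $file) or do { warn "Can't open $file: $!"; next };
        binmode($fh);
        push @{ $by_digest{ Digest::MD5->new->addfile($fh)->hexdigest } }, $file;
        close($fh);
    }
    return grep { @{$_} > 1 } values %by_digest;
}
```

A size match only says two files *might* be identical; hashing (or a byte-by-byte compare) is what confirms it, and hashing only the size-collision lists keeps you from reading every file on disk.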
In reply to Re: Scanning for duplicate files
by kjherron
in thread Scanning for duplicate files
by Amoe