Amoe has asked for the wisdom of the Perl Monks concerning the following question:
    #!/usr/bin/perl
    use strict;
    use warnings;

    my $dir = shift();
    my @images;
    opendir(DIR, $dir) or die("Couldn't open dir $dir: $!");
    foreach my $file_found (readdir(DIR)) {
        my %image;
        $image{name} = $file_found;
        $image{size} = (stat("$dir/$file_found"))[7];
        push @images, \%image;
    }
    closedir(DIR);

    my $previous = 'bo mix';
    my @duplicates = grep $_ eq $previous
        && ($_ = %{$_})
        && ($_ = $_{size})
        && ($previous = $_), @images;
    print join ', ', @duplicates;

That produces no output on a dir which I know contains files of duplicate sizes. Not surprising, considering that that grep feels absolutely horrible. But is there a way to do that horrible grep in a better (and working) way? Because I think the code's okay up to there.
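For comparison, a minimal sketch of one common way to attack this without a stateful grep: instead of comparing each file to the previous one, invert the problem and hash filenames by size, then report any size that collects more than one name. This is an assumption about intent (duplicate *sizes*, not duplicate *contents*), and the `%by_size` hash and lexical directory handle are illustrative choices, not part of the original code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: group filenames by size, then print sizes shared by more
# than one file. Assumes plain files in a single directory.
my $dir = shift or die "Usage: $0 <dir>\n";

opendir my $dh, $dir or die "Couldn't open dir $dir: $!";
my %by_size;
for my $file (readdir $dh) {
    next unless -f "$dir/$file";           # skip '.', '..', subdirs
    my $size = (stat "$dir/$file")[7];     # element 7 of stat is size
    push @{ $by_size{$size} }, $file;      # collect names under size
}
closedir $dh;

for my $size (sort { $a <=> $b } keys %by_size) {
    my @files = @{ $by_size{$size} };
    print "$size: @files\n" if @files > 1; # only sizes with duplicates
}
```

Note that equal sizes only suggest candidate duplicates; confirming identical contents would need a further comparison (e.g. a digest of each candidate).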
Replies are listed 'Best First'.
Re: Scanning for duplicate files
by demerphq (Chancellor) on Sep 12, 2001 at 23:08 UTC

Re: Scanning for duplicate files
by chromatic (Archbishop) on Sep 12, 2001 at 21:46 UTC

Re: Scanning for duplicate files
by kjherron (Pilgrim) on Sep 12, 2001 at 21:57 UTC

Re: Scanning for duplicate files
by Zaxo (Archbishop) on Sep 12, 2001 at 23:52 UTC

Re: Scanning for duplicate files
by hopes (Friar) on Sep 13, 2001 at 08:54 UTC