in reply to Best programming practice

Just curious, but I couldn't get either of your subroutines working. Giving it some thought since I first saw the post, I put together one that follows the semantics you mentioned: it leaves unique filenames alone (ones with no dups) and removes duplicates among files sharing the same prefix (minus extension), keeping the one extension you pass in. Is this along the lines of what you're trying to do?
    my $working_dir = $ARGV[0];   # starting directory
    my $extension   = $ARGV[1];   # extension to save

    dedup($working_dir);
    exit 0;

    sub dedup {
        my $path  = shift;
        my @files = glob("$path/*");

        print "Checking [$path] ...\n";

        foreach (@files) {
            if (-d $_ && -x $_) {             # recurse into subdirectories
                dedup($_);
                next;
            }
            my ($base, $ext) = m/(.*)\.([^\.]+)$/
                or next;                      # skip names with no extension
            my @matches = glob("$base*");     # siblings sharing the prefix
            print "\tremoving: $_\n" if @matches > 1 && $ext ne $extension;
            # unlink $_ if @matches > 1 && $ext ne $extension;
        }
    }
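Run it as, say, "perl dedup.pl /some/dir txt" (dedup.pl being whatever you save it as) to keep the .txt copy of each basename. With the unlink commented out it only prints what it would remove, so it's safe to dry-run first.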

I'm not using File::Find here, but the script does recurse through accessible subdirectories. It only looks for dups within a single directory level, so foo.* files spread across multiple directories won't affect each other.
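If you did want File::Find, the equivalent walk would look something like this untested sketch (by default find() chdirs into each directory, so $_ is the bare filename and the glob sees its siblings):

    use File::Find;

    my ($working_dir, $extension) = @ARGV;

    find(sub {
        return if -d $_;                      # find() descends into dirs itself
        my ($base, $ext) = m/(.*)\.([^\.]+)$/
            or return;                        # skip names with no extension
        my @matches = glob("$base*");         # siblings sharing the prefix
        print "\tremoving: $File::Find::name\n"
            if @matches > 1 && $ext ne $extension;
        # unlink $_ if @matches > 1 && $ext ne $extension;
    }, $working_dir);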

This is a quick and dirty script; it could be much better with some argument checking, an appropriate way to handle "." dot files, and the like, but it seems to do a fairly good job.
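For what it's worth, the argument checking could be as little as something like this (a sketch, not part of the script above):

    # minimal sanity checks before calling dedup()
    die "usage: $0 <start-dir> <extension-to-keep>\n" unless @ARGV == 2;
    my ($working_dir, $extension) = @ARGV;
    die "not a readable directory: $working_dir\n"
        unless -d $working_dir && -r _;       # _ reuses the stat from -d
    $extension =~ s/^\.//;                    # accept "txt" or ".txt"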

If you can tell me what I might have missed when running your code, I'd love to give it another try and compare the output.

---
echo S 1 [ Y V U | perl -ane 'print reverse map { $_ = chr(ord($_)-1) } @F;'