in reply to Best programming practice
my $working_dir = $ARGV[0];    # starting directory
my $extension   = $ARGV[1];    # extension to save

dedup($working_dir);
exit 0;

sub dedup {
    my $path  = shift;
    my @files = glob("$path/*");
    print "Checking [$path] ...\n";
    foreach (@files) {
        dedup("$_") if (-d $_ && -x $_);    # recurse into subdirectories
        my ($base, $ext) = m/(.*)\.([^\.]+)$/;
        my @matches = glob("$base*");
        print "\tremoving: $_\n" if scalar @matches > 1 && $ext ne $extension;
        # unlink $_ if scalar @matches > 1 && $ext ne $extension;
    }
}
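It just takes the starting directory and the extension you want to keep as its two arguments. Assuming you save it as dedup.pl (the script name, directory, and extension here are only for illustration), you'd run it something like:

    perl dedup.pl /home/me/photos jpg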
I'm not using File::Find here, but it does work recursively through accessible subdirectories. It only looks at dups within the same directory level, so if you have foo.* in multiple directories, copies in different directories won't be compared against each other or touched.
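For comparison, here's a rough sketch of how the same per-directory check could be written with File::Find; it's untested and the names are just for illustration, but the keep-one-extension rule is the same as above:

    use strict;
    use warnings;
    use File::Find;

    my ($working_dir, $extension) = @ARGV;

    # Walk every accessible directory under the starting point; no_chdir keeps
    # $File::Find::name usable as a path relative to where we started.
    find({ no_chdir => 1, wanted => sub {
        my $dir = $File::Find::name;
        return unless -d $dir && -x $dir;              # act once per directory
        foreach my $file (glob("$dir/*")) {
            next if -d $file;
            my ($base, $ext) = $file =~ m/(.*)\.([^\.]+)$/ or next;
            my @matches = glob("$base*");              # same sibling test as above
            print "\tremoving: $file\n" if @matches > 1 && $ext ne $extension;
            # unlink $file if @matches > 1 && $ext ne $extension;
        }
    }}, $working_dir);

The only real win there is that File::Find does the directory walking (permissions, odd cases) for you; the dup logic itself doesn't change.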
This is a quick and dirty script, so it could be made much better with some argument checking, a sensible way to handle "." dot files, and the like; but it seems to do a fairly good job.
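If it's useful, here's roughly what that argument checking might look like, plus one way to pick up dot files (all illustrative, not something I've run against your tree):

    use strict;
    use warnings;

    # Basic argument checking for the top of the script:
    die "usage: $0 <directory> <extension-to-keep>\n" unless @ARGV == 2;
    my ($working_dir, $extension) = @ARGV;
    die "not a readable directory: $working_dir\n"
        unless -d $working_dir && -r $working_dir;
    $extension =~ s/^\.//;    # accept ".jpg" as well as "jpg"

    # glob("$path/*") skips dot files; to include them while still skipping
    # "." and ".." you could use something like:
    # my @files = grep { !m{/\.\.?$} } glob("$path/{.*,*}");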
If you can tell me what I might have missed when running your code, I'd love to give it a run and compare the output.
---
echo S 1 [ Y V U | perl -ane 'print reverse map { $_ = chr(ord($_)-1) } @F;'