the code's already been posted, and the other solutions are obviously better, but i wanted to do more useless one-lining...
#!/usr/local/bin/perl -w
use strict;
use File::Find;

# no_chdir leaves $_ set to the full pathname inside wanted
find( { wanted => sub { print "$_\n" if /\.pm$/ }, no_chdir => 1 }, @INC );
hmmm, there are probably subdirectories in @INC that we've already searched, since find recursively descends through each entry. Which would be faster: searching through all the directories with find as-is, or pruning with a hash so we don't re-search directories we've already been through? The following is untested (ok, so was the above code), but it's an example...
# same guidelines as above, use strict and that ilk
my %been_at;
sub wanted {
    # prune any directory we've already walked --
    # one @INC entry can be nested inside another
    if (-d and $been_at{$File::Find::name}++) {
        $File::Find::prune = 1;
        return;
    }
    print "$_\n" if /\.pm$/;
}
find( { wanted => \&wanted, no_chdir => 1 }, @INC );
So the question i now have is: would it be faster to build a hash, or to recurse through possibly the same directories? i don't think this example has a large enough search space to make much of a difference, but if it were larger, would the latter method be faster? i think so, but maybe someone can point out something i'm missing...
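For what it's worth, here's an untested sketch of one wrinkle the hash approach probably needs to handle: the same directory can appear in @INC under two different names (a relative path, or a symlink), in which case a hash keyed on the raw path won't catch the duplicate. Normalizing with Cwd::realpath before checking the hash should cover that; the %seen name and the grep for existing directories are just my choices here.

```perl
#!/usr/local/bin/perl -w
use strict;
use File::Find;
use Cwd qw(realpath);

my %seen;
find( {
    no_chdir => 1,
    wanted   => sub {
        if (-d) {
            # key the hash on the canonical path, so symlinked or
            # relative duplicates of the same directory are caught
            my $real = realpath($_);
            if (defined $real && $seen{$real}++) {
                $File::Find::prune = 1;    # don't descend again
                return;
            }
        }
        print "$_\n" if /\.pm$/;
    },
}, grep { -d } @INC );
```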
Also, while i'm at it: is no_chdir a performance hit when recursing through directories, or is it better to let find change directories? Please comment, i'm still new to File::Find.
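The no_chdir question could probably be settled empirically rather than by guessing. Here's a rough, untested sketch using the core Benchmark module; the iteration count of 10 is arbitrary, and the numbers will vary with filesystem and cache state, so take any result with a grain of salt.

```perl
#!/usr/local/bin/perl -w
use strict;
use File::Find;
use Benchmark qw(timethese);

# only count matches, so we time the traversal and not printing
my $count;
my @dirs = grep { -d } @INC;

timethese( 10, {
    chdir    => sub {
        $count = 0;
        find( sub { $count++ if /\.pm$/ }, @dirs );
    },
    no_chdir => sub {
        $count = 0;
        find( { wanted => sub { $count++ if /\.pm$/ },
                no_chdir => 1 }, @dirs );
    },
} );
```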
jynx