in reply to Quickly Find Files with Wildcard?
Just some thoughts on this topic. I did no testing or benchmarking, so all of this could be nonsense ;o)
How many entries are read? It might be a speedup to grep for plain files while reading the directory (so you have fewer entries in @a to check afterwards). This probably only pays off if there are many more directories than files.
```perl
sub find_hash_core {
    my $dir = './';
    opendir my $dirh, $dir or die "$dir: $!\n";
    # if you need the current workdir, see Cwd for how to retrieve
    # it here and restore it later
    chdir $dir;
    my @files = grep { -f $_ } readdir $dirh;
    for ( @files ) {
        if ( m/1234yourhashvalue/ ) {
            print "found hash";
        }
    }
}
```
You could even check that hash value inside the grep:
```perl
my @files = grep { -f $_ && m/1234yourhashvalue/ } readdir $dirh;
```
Did you try glob() instead of readdir? I don't know if there is a big difference between those implementations...
```perl
chdir $dir;
my @files = glob( "*.1234yourhashvalue" );
```
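If you want to settle the readdir-vs-glob question for your own directory, the core Benchmark module can compare them directly. A minimal sketch (untested here, assuming a current directory you can read and the placeholder hash value from above):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $dir = '.';    # hypothetical target directory; adjust as needed

# readdir plus a grep over the entries
sub via_readdir {
    opendir my $dirh, $dir or die "$dir: $!\n";
    my @files = grep { -f "$dir/$_" && /1234yourhashvalue/ } readdir $dirh;
    closedir $dirh;
    return @files;
}

# glob with a matching wildcard pattern; strip the "$dir/" prefix
# so both subs return bare filenames
sub via_glob {
    return map { s{^\Q$dir\E/}{}r } glob("$dir/*1234yourhashvalue*");
}

# run each candidate 1000 times and print a comparison table
cmpthese( 1000, { readdir => \&via_readdir, glob => \&via_glob } );
```

cmpthese prints a rate table, so you can see at a glance whether the difference matters for your directory sizes.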
Did you try the Linux find command yet?
```perl
my @files = qx{ find $dir -maxdepth 1 -type f -name "*.1234yourhashvalue" };
```
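One thing to watch with qx: each returned element keeps the trailing newline that find emits, so chomp the list before using the names. A small sketch (assuming the same hypothetical $dir and placeholder pattern as above):

```perl
use strict;
use warnings;

my $dir = '.';    # hypothetical directory; adjust as needed

# qx() returns one list element per output line, each ending in the
# newline printed by find; chomp strips those in place
my @files = qx{ find $dir -maxdepth 1 -type f -name "*.1234yourhashvalue" };
chomp @files;

print "$_\n" for @files;
</imports>
</imports>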
For a more detailed answer, please give more information... Otherwise it's up to you to find a solution...
Update: fixed file/find typo
Re^2: Quickly Find Files with Wildcard?
by expresspotato (Beadle) on Mar 23, 2009 at 02:32 UTC