in reply to Quickly Find Files with Wildcard?

Just some thoughts on this topic. I did no testing or benchmarking, so all of this could be nonsense ;o)

How many entries are read? Maybe it is a speedup if you grep for plain files while reading from the directory (so you have fewer entries in @a to check afterwards). This probably only makes sense if there are many more directories than files.

sub find_hash_core {
    my $dir = './';
    opendir my $dirh, $dir or die "$dir: $!\n";
    # if you need the current workdir, see Cwd for how to retrieve and restore it later
    chdir $dir;
    my @files = grep { -f $_ } readdir $dirh;
    closedir $dirh;
    for ( @files ) {
        if ( m/1234yourhashvalue/ ) {
            print "found hash";
        }
    }
}
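
If each hash matches at most one file, you can stop scanning at the first hit; a minimal sketch of that early exit, assuming the same hypothetical hash pattern:

for my $f ( @files ) {
    if ( $f =~ m/1234yourhashvalue/ ) {
        print "found hash\n";
        last;   # stop at the first match; assumes one file per hash
    }
}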

You could even check that hash value inside the grep:

my @files = grep { -f $_ && m/1234yourhashvalue/ } readdir $dirh;
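
Note that readdir returns bare names, which is why the chdir above is needed for the -f test. If you would rather not change directories, a sketch that prefixes the path instead (same hypothetical hash pattern):

opendir my $dirh, $dir or die "$dir: $!\n";
my @files = grep { -f "$dir/$_" && m/1234yourhashvalue/ } readdir $dirh;
closedir $dirh;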

Did you try glob() instead of readdir? I don't know if there is a big difference between those implementations...

chdir $dir;
my @files = glob( "*.1234yourhashvalue" );
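
glob also accepts a path, so the chdir can be skipped; a sketch, again assuming the hash is the file extension:

my @files = glob( "$dir/*.1234yourhashvalue" );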

Did you try the Linux find command yet?

my @files = qx{ find $dir -maxdepth 1 -type f -name "*.1234yourhashvalue" };
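
Note that qx returns one line per file, each with a trailing newline, so you will probably want to chomp the results; a minimal addition, not tested here:

chomp( my @files = qx{ find $dir -maxdepth 1 -type f -name "*.1234yourhashvalue" } );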

For a more detailed answer, please give more information... Otherwise it's up to you to find a solution...

Update: fixed file/find typo

Replies are listed 'Best First'.
Re^2: Quickly Find Files with Wildcard?
by expresspotato (Beadle) on Mar 23, 2009 at 02:32 UTC
    Thank you for all your suggestions. The speed of responses here is extraordinary! It seems that accessing these remote file systems over SSHFS was the main bottleneck. Also, exiting on the first find of the hash seems to have helped reduce query times. The requests are still made using threads, but now each server is queried directly by calling a specially formatted webpage that returns whether that server has the hash key in question. My hat's off to you, Monks!