I've played a bit with this option before. When File::Find enters a new directory, it does a readdir(). The output of that readdir() is what goes into the preprocess routine. What you return is (a) a filtered version of that list, or perhaps even (b) a sorted version of it. If you take a directory name out of this list, find() will not follow down that path (useful for pruning off a directory branch).
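To make that concrete, here's a small runnable sketch. It builds a throwaway sandbox (the "keep" and "tmp" directory names are invented for the demo, not from the OP's tree) and uses a preprocess routine that sorts readdir()'s output and drops any directory named "tmp", so find() never descends into that branch:

```perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);
use File::Path qw(make_path);

# Build a small sandbox: one directory we keep, one we prune.
my $root = tempdir( CLEANUP => 1 );
make_path( "$root/keep", "$root/tmp" );
open my $fh, '>', "$root/keep/a.log" or die $!; close $fh;
open $fh,    '>', "$root/tmp/b.log"  or die $!; close $fh;

# Sort readdir()'s output and remove any directory named "tmp";
# find() will then never follow down that branch.
sub sort_and_prune {
    return sort grep { !( -d $_ && $_ eq 'tmp' ) } @_;
}

my @seen;
find( { wanted     => sub { push @seen, $File::Find::name },
        preprocess => \&sort_and_prune },
      $root );

print "$_\n" for @seen;   # b.log never appears: its branch was pruned
```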
I'm not sure that using the preprocess option will make any significant difference in performance in this case. It depends on how many files are in the /advanced/ directories.
As a trick, there is a special filehandle, _ (note: not $_). Every file test causes a stat(); testing against _ reuses the stat info from the previous file test, in this case the -f test. This will make a performance difference, since stat() is not a quick operation. There are some "yeah, but"s concerning various types of links (I forget the details right now), but usually this is not an issue.
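A minimal demonstration of the trick (the file name is just a placeholder for the demo): the first -f stats the file and fills the buffer, and the -s _ that follows reads the cached stat instead of hitting the filesystem again.

```perl
use strict;
use warnings;

# Create a throwaway file to test against.
open my $fh, '>', 'demo.txt' or die $!;
print $fh "hello\n";
close $fh;

my ( $is_plain, $size );
if ( -f 'demo.txt' ) {    # this stats demo.txt and fills the stat buffer
    $is_plain = 1;
    $size = -s _;         # reuses the cached stat -- no second stat() call
}
print "plain file, $size bytes\n" if $is_plain;
unlink 'demo.txt';
```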
Didn't test this, but I think this will work...if I got my logic right.
Update: Oops, on a second look the code below had to be adjusted: the OP wants to process .log files and follow down every directory that isn't "advanced".
sub preprocess {
    my @to_return;
    foreach (@_) {
        # Prune "advanced" directory branches, but do pass .log files
        # (and every other directory) through to wanted().
        # The bare -f stats the entry; -d _ reuses that stat buffer.
        push @to_return, $_
            if ( -f and /\.log$/ ) or ( -d _ and !/advanced/ );
    }
    return @to_return;
}
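For completeness, here's a hedged sketch of how the routine plugs into find(). The preprocess is repeated so the demo is self-contained, and the directory and file names are invented for the demo; the OP's real processing would go where the push into @logs is.

```perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);
use File::Path qw(make_path);

# Keep .log files and every directory not matching /advanced/.
sub preprocess {
    my @to_return;
    foreach (@_) {
        push @to_return, $_
            if ( -f and /\.log$/ ) or ( -d _ and !/advanced/ );
    }
    return @to_return;
}

my @logs;
sub wanted {
    # Only .log files survive preprocess, but double-check anyway.
    push @logs, $File::Find::name if -f $_ && /\.log$/;
}

# Sandbox: a normal branch with a .log file, and an "advanced"
# branch that must be pruned.
my $root = tempdir( CLEANUP => 1 );
make_path( "$root/app", "$root/advanced" );
for my $f ( "$root/app/run.log", "$root/app/notes.txt",
            "$root/advanced/skip.log" ) {
    open my $fh, '>', $f or die $!; close $fh;
}

find( { wanted => \&wanted, preprocess => \&preprocess }, $root );
print "$_\n" for @logs;   # only the run.log under app/ is found
```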
In reply to Re^2: File::find preprocess problem
by Marshall
in thread File::find preprocess problem
by nemesdani