in reply to Testing for existance of subdirectories
How long is a "long" time?
Under all UNIX filesystems that I know of, the special files which hold the directory information don't hold any information about the file itself (just its name and "inode" number), so you'd expect a call to a function like that to take a few disk I/Os most of the time.
Under DOS's FAT, the special files which hold the directory information also hold the file's length, type, attributes, etc., and those entries are returned by whatever they call FindFirst/FindNext these days. Maybe you still just "mov ah, 4Eh; int 21h".
So, strictly speaking, the call to (-d $dir.$file) shouldn't need to touch the disk at all, because Perl has just been handed all of that information by the "FindFirst/FindNext" system calls. My guess is that, because Perl was written under Unix, it doesn't expect the information at that point, and if the Win32 port isn't smart enough to cache it for a later "stat", then the stat has to be emulated. That probably means calling "FindFirst" again and scanning the directory from the start looking for that one file, so you'd pay a disk I/O (or at best a cache access) for every non-directory that comes before the first subdirectory in a directory.
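Just to make the cost concrete, here's my guess at the shape of the loop in question (the original isn't reproduced here; the sub name and the assumption that $dir ends in a path separator are mine):

    sub has_subdir {
        my $dir = shift;                 # assumed to end in a path separator
        opendir CURRENT, $dir or return 0;
        while (defined(my $file = readdir CURRENT)) {
            next if $file =~ /^\.\.?$/;  # skip "." and ".."
            if (-d $dir.$file) {         # one stat (possibly an emulated FindFirst) per entry
                closedir CURRENT;
                return 1;
            }
        }
        closedir CURRENT;
        return 0;
    }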
About the only way I can see around that problem, if my guess is right, would be to call the FindFirst/FindNext API directly and process its output yourself. That might not be as daunting as it sounds, though I'll leave it for someone else to help you with the details!
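For what it's worth, here's a rough sketch of that direct approach, assuming the Win32::API module is available. The sub name is mine, $dir is again assumed to end in a separator, and the 320-byte buffer size and the cFileName offset of 44 come from the WIN32_FIND_DATAA layout, so they're worth double-checking:

    use Win32::API;

    my $FindFirst = Win32::API->new('kernel32', 'FindFirstFileA', 'PP', 'N');
    my $FindNext  = Win32::API->new('kernel32', 'FindNextFileA',  'NP', 'I');
    my $FindClose = Win32::API->new('kernel32', 'FindClose',      'N',  'I');

    sub has_subdir_win32 {
        my $dir  = shift;
        my $data = "\0" x 320;                    # buffer for a WIN32_FIND_DATAA struct
        my $h = $FindFirst->Call($dir . '*', $data);
        return 0 if $h == -1 or $h == 0xFFFFFFFF; # INVALID_HANDLE_VALUE, signed or unsigned
        my $found = 0;
        while (1) {
            my $attrs = unpack 'L',  $data;               # dwFileAttributes
            my $name  = unpack 'Z*', substr($data, 44);   # cFileName
            if (($attrs & 0x10) and $name !~ /^\.\.?$/) { # FILE_ATTRIBUTE_DIRECTORY
                $found = 1;
                last;
            }
            last unless $FindNext->Call($h, $data);
        }
        $FindClose->Call($h);
        return $found;
    }

No extra stat is needed here, because the directory bit comes back in the find data along with the name.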
Update: I've just had a thought. Try replacing your loop with:
    my @files = readdir CURRENT;
    closedir CURRENT;
    foreach (@files) {
        # skip "." and "..", then test with the directory prefix, as above
        return 1 if (!/^\.\.?$/ and -d $dir.$_);
    }
    return 0;
There's a chance that might help.
Also, under UNIX you can test whether a directory has subdirectories with if ((stat $dir)[3] > 2), since a directory's link count is two plus one ".." link from each immediate subdirectory, so it's not that inefficient :-).
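A minimal sketch of that test wrapped up as a sub (the name is mine; it relies on traditional UNIX filesystem semantics):

    sub has_subdir_unix {
        my $dir = shift;
        # field 3 of stat is nlink; 2 means just "." and the parent's entry
        return ((stat $dir)[3] > 2);
    }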