in reply to opendir slower than ls on large dirs?

It's the grep using a regex that's slowing you down. For a large directory, and assuming what you're looking for isn't cached, you would expect the bottleneck to be disk overhead, and whether or not you call ls to make little difference. However, in your readdir solutions you are doing work in Perl space for every file returned, because of your grep. grep $_ eq $file would be better than grep /$file/, but you're still doing Perl work for each file returned.
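Roughly, the two variants look like this ($dir and $file are assumed to be the directory and the name you're after):

    opendir my $dh, $dir or die "Cannot opendir $dir: $!";
    my @names = readdir $dh;
    closedir $dh;

    # Regex match: runs the pattern against every name, and also
    # matches substrings, which is probably not what you want.
    my @by_regex = grep { /$file/ } @names;

    # String equality: still a Perl op per name, but cheaper and exact.
    my @by_eq = grep { $_ eq $file } @names;

Either way you've read the whole directory into Perl just to check one name.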

If all you want to know is whether a certain file exists, use -e. That's going to be the fastest, and it will tell you exactly that. On Unix operating systems, -e on a large directory will still be slower than on a small directory, and that's due to the Unix decision to store filenames unsorted in a directory (storing filenames sorted makes lookups in a large directory faster, but then those operations would be slower in a small directory). Now I was a bad boy and generalized Unix - which is not a smart thing to do, because Unix means a gazillion ways of doing the same thing, each slightly different from the others, so no doubt there are a few file systems out there that do store filenames differently. I think Windows directories store files unsorted as well, but I could be mistaken.
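For what it's worth, the -e approach in a minimal sketch ($dir and $file as above):

    # No readdir, no grep: let the file system do the lookup.
    my $path = "$dir/$file";
    if ( -e $path ) {
        print "$path exists\n";
    }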

Lesson to be learned: do not create large directories! (Directories are like drawers: the more stuff you have in them, the harder it is to find something.)