in reply to Can the special underline filehandle (stat cache) be cleared?

"Our filesystem gets too much wear and tear"

I call "premature optimisation." I find it extremely unlikely that this actually will save much, if any, speed, and pretty much impossible that consecutive hits to the same directory will actually touch the physical media in any way, shape, or form. Your hard disk has a cache. Your filesystem driver has a cache. Your C library has a cache. I doubt that all of these will be emptied between the time that File::Find calls lstat and the time that your wanted sub calls lstat. If so, you probably have bigger issues than just how hard your perl code is hitting the disk.

If anything, using _ merely avoids repeating the stat or lstat call through the C library, so you save some function call overhead. But when you're walking 34,000 files, I somehow doubt that CPU time is what limits your application's speed.
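
For illustration, a minimal sketch of the pattern in question, assuming a File::Find traversal (the starting directory is a made-up placeholder): one explicit lstat() per file fills perl's stat cache, and every later file test against the special _ filehandle reads that cache instead of calling stat through the C library again.

    use File::Find;

    find(
        sub {
            # One explicit lstat() fills perl's stat cache for this entry.
            lstat($_) or return;

            # These tests read the cached results via the special
            # filehandle _ instead of issuing fresh lstat(2) calls.
            return unless -f _;    # plain file?
            return unless -r _;    # readable by us?

            printf "%s (%d bytes)\n", $File::Find::name, -s _;
        },
        '/some/dir',               # hypothetical starting directory
    );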

Thus, my suggestion: relax. Don't fret the small stuff ;-)

Re^2: Can the special underline filehandle (stat cache) be cleared?
by ammon (Sexton) on Oct 05, 2006 at 00:45 UTC
    I'm not trying to save speed -- in fact, my benchmarking shows that what I'm trying to do is currently slower than just making the additional stat() calls.
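
    For what it's worth, a benchmark of that trade-off could be sketched along these lines (the file name and timing budget are arbitrary stand-ins, not taken from the original post):

        use Benchmark qw(cmpthese);

        my $file = '/etc/hosts';    # any existing, readable file will do

        cmpthese(-2, {
            # three file tests on the name, each doing its own stat(2)
            fresh_stats => sub { my $ok = -f $file && -r $file && -s $file },
            # one lstat(), then file tests against the cached _ filehandle
            stat_cache  => sub { lstat $file; my $ok = -f _ && -r _ && -s _ },
        });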

    Yes, we have bigger issues than how hard my perl code is hitting the disk. But when those bigger issues consist of already-problematic data throughput off our fileservers and around the network, my perl code does have a measurable negative impact (heck... the sys-admins don't even like us running a simple du on the filesystem in question), particularly since it's intended to become a fundamental module in my department.