in reply to stat on find output

If you chomped line ends as kennethk suggested but things still look funny,

here's another possibility, one typically encountered with Unix servers in a larger compute center:

Do check the mount output and every mtab / fstab you can lay your hands on, both locally and on remote NFS servers: grep for mount options like noatime or similar (note that Linux also offers fake-atime options such as relatime, which sit between accurate atime and noatime).
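As a rough sketch of that check in Perl: the snippet below scans the mount table for atime-related options. It assumes a Linux box where /proc/mounts is authoritative; on Solaris you would instead parse the output of the mount command, and the field layout shown here is an assumption about the Linux format.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Linux-specific: /proc/mounts lists device, mount point, fs type, options.
# (On Solaris, pipe `mount -v` output through a similar loop instead.)
open my $mnt, '<', '/proc/mounts' or die "open /proc/mounts: $!";
while (<$mnt>) {
    my ($dev, $point, $type, $opts) = split ' ';
    # Flag any mount whose options touch atime handling.
    print "$point ($type): $opts\n"
        if $opts =~ /\b(?:no|rel|strict)atime\b/;
}
close $mnt;
```

Any mount printed here has atime semantics that differ from what a naive stat-based "last read" scan expects.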

Note that in such setups there's a high likelihood of e.g. /sasdata and /sasdata/it/development/ actually being different (NFS?) filesystems.
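You can verify that suspicion from Perl itself: field 0 of stat is the device number, and two directories on different filesystems report different devices. A minimal sketch, with the two paths as placeholders for your own mount points:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder paths for illustration; substitute your own directories.
my ($a, $b) = ('/sasdata', '/sasdata/it/development');

# stat field 0 is the device number the file lives on.
my ($dev_a) = stat $a or die "stat $a: $!";
my ($dev_b) = stat $b or die "stat $b: $!";

print $dev_a == $dev_b
    ? "same filesystem\n"
    : "different filesystems (devices $dev_a vs $dev_b)\n";
```

If they differ, mount options (and atime behavior) can differ between the two trees as well.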

"Millions of files and 5 TB" sounds like a slow and well-known scenario (see the comment below).

Consider using _ as the "filename", as it lets you reuse the result of your last stat without going all the way to the file cache or block buffer again. Depending on flux, it might be worthwhile to redirect the find output to a file. Better yet, fold the find into Perl proper, possibly testing File::Find against readdir. Assuming seek times are dominant and the system is under load, you might nearly halve your number of seeks.
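The _ trick above can be sketched as follows, assuming filenames arrive on STDIN from a find pipeline (one per line):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Reads `find ... -print` output from STDIN, one filename per line.
while (my $file = <STDIN>) {
    chomp $file;                  # strip the newline, per kennethk
    my @st = stat $file or next;  # one real stat (one set of seeks) per file

    # The special _ filehandle reuses the buffer filled by the last stat,
    # so this file test costs no additional system call or disk access.
    next unless -f _;

    my $atime = $st[8];           # last access time, epoch seconds
    print "$atime\t$file\n";
}
```

Every plain-file test done via _ instead of a fresh stat or -f $file avoids another round trip to the filesystem, which is where the "nearly halve your seeks" estimate comes from.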

Re^2: stat on find output
by hitesh_ofdoon (Initiate) on Oct 19, 2009 at 21:52 UTC
    Yes, it is a huge Solaris server and I am trying to get the last read time on millions of files on a 5 TB storage.

      Maybe a more readable / portable / simple approach would be to be more perlish and use File::Util and its last_access | last_modified | etc. functions? Maybe this doesn't apply to your server situation?
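      A quick sketch of that idea. Note that File::Util is a CPAN module, not core Perl, so this falls back to a plain stat when it isn't installed; the exact return format of its time methods is taken from its documentation and worth double-checking on your version:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# File::Util is from CPAN; fall back to core stat if it isn't available.
my $have_fu = eval { require File::Util; 1 };

sub last_access_of {
    my ($file) = @_;
    if ($have_fu) {
        # Per the File::Util docs, last_access returns a formatted time string.
        return File::Util->new->last_access($file);
    }
    # Core-Perl equivalent: stat field 8 is atime in epoch seconds.
    return scalar localtime( (stat $file)[8] );
}

print last_access_of($0), "\n";   # last access time of this very script
```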

      Just a something something...