in reply to aborting File::Find::find

Given that you have a specific target file (and hence its inode and the number of links to that inode), why not just use the normal unix "find" utility:
use strict;
my ($dir, $file) = @ARGV;
my ($finode, $fnlinks) = (lstat($file))[1,3];

$/ = chr(0);
my @hardlinks = `find $dir -inum $finode -print0`;
chomp @hardlinks;   # get rid of null-byte terminations

printf "found %d of %d links for %s (inode %d) in %s:\n",
    scalar @hardlinks, $fnlinks, $file, $finode, $dir;
print join("\n", @hardlinks), "\n";
I haven't benchmarked that particular script, but based on prior experience, if you happen to be searching over really large directory trees (thousands of files), this approach will be at least 5 or 6 times faster than any solution involving File::Find. (I have posted at least three benchmarks on PM to prove this.)
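For comparison, here's roughly what the File::Find version of the same search would look like (a minimal sketch, not the code from those benchmarks):

use strict;
use warnings;
use File::Find;

my ($dir, $file) = @ARGV;
my ($finode, $fnlinks) = (lstat($file))[1,3];

# visit every file under $dir and compare its inode against the target's
my @hardlinks;
find(
    sub {
        my $inode = (lstat $_)[1];
        push @hardlinks, $File::Find::name
            if defined $inode and $inode == $finode;
    },
    $dir
);

printf "found %d of %d links for %s (inode %d) in %s:\n",
    scalar @hardlinks, $fnlinks, $file, $finode, $dir;
print join("\n", @hardlinks), "\n";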

It also seems a lot simpler. Since you're looking specifically for hard links (files with identical inodes), the issue of portability to non-unix systems is irrelevant.

The unix "find" command is the right tool for this job (and perl just makes it easier to use "find", which is worthwhile).

(update: simplified the "printf" statement a little; also, I should clarify that the "5 to 6 times faster" is in terms of wall-clock time to finish a given run.)

(another update: after simplifying the printf, I put the args in the right order so that the output is correct.)

Re^2: aborting File::Find::find
by marvell (Pilgrim) on Nov 17, 2006 at 11:30 UTC
    Will Unix find abort when it finds the correct number of links?

    --
    ¤ Steve Marvell

      By itself, it seems that unix find will not keep track of the number of links associated with a given inode, and won't stop as soon as that number of links is found.

      And of course, if the given path to search is not the root directory of a disk volume, it's possible that one or more links to the given target file will be outside the search space, so adding a bunch of code to bail out early won't help.

      But assuming you don't need to worry about that kind of anomaly, you can tailor the perl script to short-circuit the find task like this:

      #!/usr/bin/perl
      use strict;
      use warnings;

      my ( $path, $file ) = @ARGV;
      die "Usage: $0 search/path data.file\n" unless ( -d $path and -f $file );
      my ( $inode, $nlinks ) = ( stat _ )[1,3];   # _ reuses the stat from the -f test
      die "$file has no hard links\n" if $nlinks == 1;

      my ( $chld, $nfound, @found );

      # parent's HUP handler: count hits; kill find once all links are seen
      $SIG{HUP} = sub { $nfound++; `kill $chld` if $nfound == $nlinks };

      $chld = open( FIND, "-|",
          "find $path -inum $inode -print0 -exec kill -HUP $$ \\;" )
          or die "find: $!\n";

      $/ = chr(0);
      while ( <FIND> ) {
          chomp;
          push @found, $_;
      }

      printf( "found %d of %d links for %s in %s\n",
          scalar @found, $nlinks, $inode, $path );
      print join( "\n", @found ), "\n";
      My first attempt involved just checking the size of @found inside the while (<FIND>) loop and bailing out when it matched $nlinks, but it turns out that the output from the find process is block-buffered when piped, so perl just waits until the process finishes.

      The above script works because the child process sends a HUP signal to the parent each time a file is found (note the double backslash to escape the semi-colon properly). The parent kills the child as soon as the expected count is reached, the child's output buffer gets flushed, and the parent can finish up right away.

      I tested it on a path that would normally take 30 seconds to traverse with unix find. I watched the initial output from the find run, and created some hard links in one of the directories that would be found early in the process. The above code found those links, reported them correctly, and finished in less than 1 second.
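      Another way around the buffering, if GNU coreutils is available, is to run find under "stdbuf" so that its output is unbuffered; then the while loop itself can bail out as soon as the expected count is reached, with no signal handler at all. This is just a sketch (it assumes stdbuf exists on your system):

      #!/usr/bin/perl
      use strict;
      use warnings;

      my ( $path, $file ) = @ARGV;
      my ( $inode, $nlinks ) = ( stat $file )[1,3];

      # stdbuf -o0 forces find to write each -print0 record immediately
      my $pid = open( my $find, "-|",
          "stdbuf -o0 find $path -inum $inode -print0" )
          or die "find: $!\n";

      $/ = chr(0);
      my @found;
      while ( <$find> ) {
          chomp;
          push @found, $_;
          if ( @found == $nlinks ) {
              kill 'TERM', $pid;   # all links seen: stop the traversal now
              last;
          }
      }
      close $find;

      printf( "found %d of %d links for %s in %s\n",
          scalar @found, $nlinks, $inode, $path );

      The trade-off is a dependency on a GNU-specific tool instead of on signal timing.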

      (UPDATE: Added a "die" condition if stat returns a link count of 1 -- no need to run find in this case.)

      (Another update: shuffled the code slightly so that the signal handler gets set up before the child process gets started.)