in reply to aborting File::Find::find
I haven't tried benchmarking that, but based on prior experience, if you happen to be searching over any really large directory trees (thousands of files), I know this approach will be at least 5 or 6 times faster than any solution involving File::Find. (I have posted at least three benchmarks on PM to prove this.)

  use strict;

  my ($dir, $file) = @ARGV;
  my ($finode, $fnlinks) = (lstat($file))[1,3];

  $/ = chr(0);
  my @hardlinks = `find $dir -inum $finode -print0`;
  chomp @hardlinks;   # get rid of null-byte terminations

  printf "found %d of %d links for %s (inode %d) in %s:\n",
      scalar @hardlinks, $fnlinks, $file, $finode, $dir;
  print join("\n", @hardlinks), "\n";
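For example, assuming you save the above as inum_links.pl (the name is just for illustration), a run would look like:

  perl inum_links.pl /some/dir /some/dir/somefile

which prints every path under the first argument that shares the second argument's inode.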
It also seems a lot simpler. Since you're looking specifically for hard links (files with identical inodes), the issue of portability to non-unix systems is irrelevant.
The unix "find" command is the right tool for this job (and perl just makes it easier to use "find", which is worthwhile).
(update: simplified the "printf" statement a little; also, I should clarify that "5 or 6 times faster" means wall-clock time to finish a given run.)
(another update: after simplifying the printf, I put the args in the right order so that the output is correct.)
Replies are listed 'Best First'.
Re^2: aborting File::Find::find
by marvell (Pilgrim) on Nov 17, 2006 at 11:30 UTC
by graff (Chancellor) on Nov 18, 2006 at 05:30 UTC