in reply to Myth busted: Shell isn't always faster than Perl
zentara, try this one. I wrote this many years ago to clean up hundreds of MB of source code (meaning hundreds of thousands of files), and it seems pretty fast: way faster than rm -rf, for example. However, my goal wasn't to remove just the files but the whole tree, so I'll comment out the part that removes directories just to make it do what yours does. Granted, this is a bit more complex. But it can't easily be duplicated in shell.
I'll also add that the difference in speed between 0.4s and 3s is quite negligible when compared to the amount of time it takes to remember and write them. The example below was ludicrously expensive to write, but it is something I do often enough that I call it "RD" (yes, upper-case: it's too dangerous to give it a short lower-case name) and put it in /usr/local/bin on all machines, all platforms, that I have access to (primarily as a symlink to a shared NFS partition). We really do use it that much ;-)

```perl
use strict;
use warnings;

$| = 1;    # unbuffer STDOUT so the progress dots show up immediately

foreach my $d (@ARGV) {
    remove_dir($d);
    rmdir $d;
}
print "\nDone.\n";

sub remove_dir {
    my $d = shift;

    # A plain file or symlink can be unlinked directly.
    if ( -f $d or -l $d ) { unlink $d; return; }

    # must be a directory?
    my ( @sfiles, @sdirs );
    local *DIR;
    opendir( DIR, $d ) || do { print "Can't open $d: $!\n"; return };
    foreach ( readdir(DIR) ) {
        next if $_ eq '.';
        next if $_ eq '..';
        my $sd = "$d/$_";
        if    ( -l $sd ) { push @sfiles, $sd }    # -l before -d: never follow symlinks
        elsif ( -d $sd ) { push @sdirs,  $sd }
        else             { push @sfiles, $sd }
    }
    closedir(DIR);
    print ".";

    # Process subdirectories in parallel via fork, at most three at a time.
    my $count = 0;
    foreach my $sd (@sdirs) {
        my $pid;
        if ( $pid = fork() ) {
            # parent
            ++$count;
        }
        elsif ( defined $pid ) {
            # child
            remove_dir($sd);
            exit;
        }
        else {
            # fork failed - try again in a bit
            sleep 5;
            redo;
        }
        while ( $count > 2 ) {
            wait();
            $count--;
        }
    }
    while ( wait() != -1 ) { }    # reap the remaining children

    # Commented out so this only removes files, to match the benchmark:
    #foreach (@sdirs) {
    #    rmdir $_ || do {
    #        warn "$0: Unable to remove directory $_: $!\n";
    #    };
    #}

    my @cannot = grep { !unlink($_) } @sfiles;
    if (@cannot) {
        warn "$0: cannot unlink @cannot\n";
    }
}
```
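For anyone who wants to time this against a non-forking approach, here is a minimal single-process sketch using the core File::Path module. This is not part of the RD script above, just an assumed baseline for comparison; note that rmtree() removes the directories themselves too, so it behaves like rm -rf rather than like the files-only variant above.

```perl
# Single-process baseline for timing comparisons, using the core
# File::Path module. Unlike the forking script above (with its rmdir
# section commented out), rmtree() removes the directories as well.
use strict;
use warnings;
use File::Path qw(rmtree);

foreach my $d (@ARGV) {
    rmtree($d);    # rmtree() carps on anything it cannot remove
}
```

The payoff of the forking version is that the `$count > 2` check caps it at three concurrent children per directory level, so unlink() calls overlap across subtrees without fork-bombing the box.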