in reply to unlinking performance

hi, if I had a cron script delete files every 4 hrs (6x a day), would I have to be concerned about the script timing out if it had to delete 100,000-200,000 files of 10 KB each? The files will never exceed 10 KB.

The file sizes should be irrelevant on most filesystems I know of. But timed out by what? Do you mean overlapping with the next run? I doubt it. Testing on XP with NTFS, which I doubt is anywhere near the most efficient setup:

C:\temp>mkdir test
C:\temp>cd test
C:\temp\test>perl -e "for (1..100_000) { open my $f, '>', $_ or die $! }"
C:\temp\test>perl -le "$n=time; unlink 1..100_000; print time-$n"
55
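
So even on NTFS the whole pass finishes in well under a minute, nowhere near your 4-hour interval. For the cron job itself, something along these lines should do. This is only a rough sketch: the directory path, the 4-hour age cutoff, and the lock-file name are placeholders I made up, not anything from your setup, and you can drop the age check if every file should go regardless of age. The lock file is there so a run simply skips itself if, against the odds, the previous one is still going:

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl ':flock';

# Hypothetical paths and values -- adjust for the real setup.
my $dir      = '/var/spool/myapp/tmp';    # directory holding the small files
my $max_age  = 4 / 24;                    # 4 hours, expressed in days for -M
my $lockfile = '/var/run/myapp-clean.lock';

# Skip this run if the previous one is still running (guards against overlap).
open my $lock, '>', $lockfile or die "can't open $lockfile: $!";
flock $lock, LOCK_EX | LOCK_NB or exit 0;

opendir my $dh, $dir or die "can't opendir $dir: $!";
my $deleted = 0;
while ( defined( my $name = readdir $dh ) ) {
    my $path = "$dir/$name";
    next unless -f $path;               # skip . , .. and anything that isn't a plain file
    next unless -M $path > $max_age;    # only delete files older than the cutoff
    unlink $path or warn "couldn't unlink $path: $!";
    $deleted++;
}
closedir $dh;
print "deleted $deleted files\n";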

Re^2: unlinking performance
by FunkyMonk (Bishop) on Jun 10, 2007 at 10:45 UTC
    Just for interest, on Debian Lenny/Testing using ext3...

    zippy:~/scripts/tmp$ uname -a
    Linux zippy 2.6.18-4-amd64 #1 SMP Mon Mar 26 11:36:53 CEST 2007 x86_64 GNU/Linux
    zippy:~/scripts/tmp$ time perl -e 'for (1..100_000) { open my $f, ">", $_ or die $! }'

    real    9m21.670s
    user    0m2.480s
    sys     9m8.294s
    zippy:~/scripts/tmp$ time perl -le 'unlink 1..100_000'

    real    0m1.819s
    user    0m0.064s
    sys     0m1.428s
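
    If anyone wants to repeat this on their own filesystem without the shell's time builtin, a self-contained version might look like the sketch below. It just combines the two one-liners above and times each step with Time::HiRes; the scratch directory name is made up:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(time);
    use File::Path qw(make_path remove_tree);

    my $dir = 'unlink_bench';    # hypothetical scratch directory
    my $n   = 100_000;

    make_path($dir);
    chdir $dir or die "can't chdir to $dir: $!";

    # Create $n empty files, named 1 .. $n, and time it.
    my $t0 = time;
    for my $i (1 .. $n) {
        open my $fh, '>', $i or die "can't create $i: $!";
    }
    printf "create: %.2f s\n", time - $t0;

    # Delete them again and time that.
    $t0 = time;
    unlink 1 .. $n;
    printf "unlink: %.2f s\n", time - $t0;

    chdir '..';
    remove_tree($dir);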