I'm working on something that uses File::Find to feed lists of files, via Thread::Queue, to one or two worker threads.
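A minimal sketch of that pattern, assuming a hypothetical starting directory and placeholder per-file work, looks something like this:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;
use File::Find;

# Shared queue between the finder (main thread) and the workers.
my $queue = Thread::Queue->new();

# A couple of worker threads that pull paths off the queue until they
# see the undef "no more work" marker.
my @workers = map {
    threads->create(sub {
        while (defined(my $path = $queue->dequeue())) {
            # Placeholder for the real per-file work (stat, checksum, copy, ...).
            print "worker ", threads->tid(), ": $path\n";
        }
    });
} 1 .. 2;

# Walk the tree and feed every plain file to the queue.
find(sub { $queue->enqueue($File::Find::name) if -f }, '/some/filesystem');

# One undef per worker tells them to exit, then wait for them all.
$queue->enqueue(undef) for @workers;
$_->join() for @workers;
```

The finder is the producer and the workers are consumers, so the slow traversal and the per-file work overlap instead of running back to back.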
My major requirement is breaking down a 10 TB, 70-million-file monster of a filesystem - raw runtime matters less (though if I can optimise it, so much the better) than being able to track progress and resume processing. A plain 'find' on this filesystem already takes a substantial amount of time (days). I'm wondering whether File::Find will let me give it a 'start location' so a run can resume where the previous one stopped.
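As far as I can tell, File::Find doesn't document a native 'start location' option, so one workaround (sketched below, with a hypothetical bookmark path and a hypothetical process_file()) is to force a repeatable traversal order with the preprocess hook and then skip entries in wanted until the last recorded path is seen again. That only avoids re-doing the per-file work on a resume; the skipped part of the tree still gets re-walked, so pruning whole top-level directories that are already finished is the bigger win.

```perl
use strict;
use warnings;
use File::Find;

# Hypothetical bookmark: the last path fully processed before the previous run stopped.
my $resume_after = '/data/projects/old/archive-1999/file.dat';
my $skipping     = defined $resume_after;

find({
    # Sort each directory's entries so the traversal order is repeatable,
    # which is what makes "last path seen" a usable bookmark.
    preprocess => sub { sort @_ },

    wanted => sub {
        return unless -f;    # only plain files
        if ($skipping) {
            # Stop skipping once we reach the bookmark; the bookmark itself
            # was already handled last time, so don't reprocess it.
            $skipping = 0 if $File::Find::name eq $resume_after;
            return;
        }
        process_file($File::Find::name);
    },
}, '/data');

# Placeholder for the real per-file work; this is also where you'd record
# $File::Find::name somewhere durable so the next run has a fresh bookmark.
sub process_file {
    my ($path) = @_;
    print "$path\n";
}
```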