I agree, the best solution is to not get into the situation in the first place. However, the script can't run 100% of the time (bugs, remote server downtime, etc.), and the files are always arriving (the volume is expected to triple over the next few months). Plus, the files tend to arrive in bursts rather than evenly spread out.
It just took 5 hours to move the 40,000+ files off the box, and I believe more than 75% of that time was disk IO (the OS slows down with that many files in one directory) spent on the copy, rename, and delete steps. Just looking at the problem at that level, I can't help thinking there is a better way to deal with it than what I am currently doing.
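One thing worth checking: if the destination is on the same filesystem, `File::Copy::move()` falls back to a plain `rename()`, which is a single metadata operation with no data copy and no separate delete. Below is a minimal sketch along those lines; the directory names (`$src_dir`, `$dst_dir`) are placeholders, not anything from the thread.

```perl
use strict;
use warnings;
use File::Copy qw(move);

# Hypothetical source/destination directories; pass them on the command line.
my ($src_dir, $dst_dir) = @ARGV;

opendir my $dh, $src_dir or die "Can't open $src_dir: $!";
while (my $name = readdir $dh) {
    next if $name =~ /^\.\.?$/;    # skip . and ..
    # On the same filesystem this is a rename(), not copy + delete.
    move("$src_dir/$name", "$dst_dir/$name")
        or warn "move failed for $name: $!";
}
closedir $dh;
```

If the target is on a different filesystem, `move()` has to copy and then unlink, so the IO cost comes back; in that case staging into a sibling directory on the same volume first (rename is cheap) and copying off-box in the background may smooth out the bursts.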
In reply to Re^4: File::Copy and file manipulation performance by Fendaria in thread File::Copy and file manipulation performance by Fendaria