On Windows (and probably under most filesystems), I'd strongly suggest that you avoid such arbitrary math, which results in odd block sizes, and instead opt for some multiple of the filesystem's inherent read size, which is generally 4096 bytes (under NTFS). I've also found that throughput gains tend to tail off rapidly as the block size grows, with 64KB usually giving the best read/write performance.
You should also ensure that you read and write the files in binary mode (':raw'), even if they are text files, as there is substantial overhead in CRLF translations, which are redundant for disk-to-disk transfers.
The only merit I see in the /10 strategy is that it makes the progress calculation simple, and that seems no good reason at all to suffer the cost of partial-block reads and writes.
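To illustrate the above, here is a minimal sketch of a fixed-block binary copy loop (`copy_with_progress` and `BLOCK_SIZE` are illustrative names, not from any module). It reads and writes in raw 64KB blocks and reports progress as copied/total bytes, so the progress calculation stays simple without forcing the block size to divide the file size:

```perl
use strict;
use warnings;

use constant BLOCK_SIZE => 64 * 1024;   # 64KB: a multiple of NTFS's 4096-byte cluster size

# Copy $src to $dst in fixed binary blocks, invoking an optional
# $progress callback with (bytes_copied, bytes_total) after each block.
sub copy_with_progress {
    my ($src, $dst, $progress) = @_;
    open my $in,  '<:raw', $src or die "open $src: $!";
    open my $out, '>:raw', $dst or die "open $dst: $!";
    my $total  = -s $in;
    my $copied = 0;
    while (1) {
        my $read = read $in, my $buf, BLOCK_SIZE;
        die "read $src: $!" unless defined $read;
        last unless $read;                      # 0 bytes read => EOF
        print {$out} $buf or die "write $dst: $!";
        $copied += $read;
        $progress->($copied, $total) if $progress;
    }
    close $out or die "close $dst: $!";
    close $in;
    return $copied;
}
```

A caller might display a percentage like so: `copy_with_progress($from, $to, sub { printf "\r%3d%%", 100 * $_[0] / $_[1] });`. Because both handles use ':raw', no CRLF translation is performed and the copy is byte-exact.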
Also, if this were purely for Windows, I'd use CopyFileEx, which provides a callback for progress monitoring and display, and is likely far more efficient than anything you could write at the Perl level. But you seem to be looking for portability?
In reply to Re: How to decide the size of block in file transferring? by BrowserUk
in thread How to decide the size of block in file transferring? by SadEmperor