in reply to How to decide the size of block in file transferring?

On Windows (and probably under most filesystems), I'd strongly suggest that you avoid such arbitrary math, which will result in weird block sizes, and instead opt for some multiple of the filesystem's inherent read size, which is generally 4096 bytes (under NTFS). I've also found that throughput gains tend to tail off rapidly as the block size grows, with 64KB usually seeming to give the best read/write performance.
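A minimal sketch of such a fixed-block copy loop (the sub and variable names are hypothetical, for illustration only):

```perl
use strict;
use warnings;

# Copy $src to $dst in fixed 64KB blocks -- a multiple of the 4096-byte
# NTFS cluster size -- instead of deriving the block size from the file size.
sub copy_in_blocks {
    my ($src, $dst) = @_;
    my $blocksize = 64 * 1024;

    open my $in,  '<:raw', $src or die "open '$src': $!";
    open my $out, '>:raw', $dst or die "open '$dst': $!";

    while (1) {
        my $read = sysread $in, my $buf, $blocksize;
        die "read '$src': $!" unless defined $read;
        last unless $read;    # EOF

        # syswrite may write fewer bytes than requested; loop until done.
        my $off = 0;
        while ($off < $read) {
            my $wrote = syswrite $out, $buf, $read - $off, $off;
            die "write '$dst': $!" unless defined $wrote;
            $off += $wrote;
        }
    }
    close $out or die "close '$dst': $!";
    close $in;
}
```

Note that only the final block is ever short, and only when the file size is not an exact multiple of 64KB.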

You should also ensure that you read/write the files in binary (':raw'), even if they are text files, as there is a substantial overhead in CRLF translation, which is redundant for disk-to-disk transfers.
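A small demonstration of the difference between the CRLF-translating layer and ':raw' (the scratch file name is hypothetical):

```perl
use strict;
use warnings;

my $file = 'crlf_demo.txt';    # hypothetical scratch file

open my $out, '>:raw', $file or die "open: $!";
print $out "line1\r\nline2\r\n";    # 14 bytes on disk
close $out;

# With explicit CRLF translation (the Windows default for text handles),
# each "\r\n" collapses to "\n" on read:
my $translated = do {
    open my $in, '<:crlf', $file or die "open: $!";
    local $/; <$in>;
};

# With ':raw', the bytes come back exactly as stored on disk:
my $raw = do {
    open my $in, '<:raw', $file or die "open: $!";
    local $/; <$in>;
};

printf "translated: %d bytes, raw: %d bytes\n",
    length $translated, length $raw;
unlink $file;
```

For a byte-for-byte copy, that translation is pure wasted work, which is why ':raw' (or `binmode`) matters here.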

The only merit I see in the /10 strategy is that it makes the progress calculation simple, and that seems no good reason at all to suffer the cost of partial block reads and writes.
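The progress calculation stays just as simple with a fixed block size, since you can compute it from byte counts. A sketch, with a hypothetical $total standing in for the real file size from `-s`:

```perl
use strict;
use warnings;

my $total  = 1_000_000;    # hypothetical file size; in real code: -s $src
my $block  = 64 * 1024;
my $copied = 0;

while ($copied < $total) {
    my $n = $total - $copied;       # last block may be short
    $n = $block if $n > $block;
    $copied += $n;                  # stands in for a real read/write pair
    printf "\rcopied %5.1f%%", 100 * $copied / $total;
}
print "\n";
```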

Also, if this were purely for Windows, I'd use CopyFileEx, which provides a callback for progress monitoring and display, and is likely far more efficient than anything you could write at the Perl level. But you seem to be looking for portability?
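If portability wins out, core Perl's File::Copy works everywhere; a sketch of a portable wrapper (the sub name is hypothetical):

```perl
use strict;
use warnings;
use File::Copy qw(copy);

# Portable path: File::Copy ships with core Perl on every platform.
# On Windows alone you could instead reach CopyFileEx through
# Win32::API (with Win32::API::Callback for the progress routine);
# that sketch is omitted here as it is Windows-specific.
sub portable_copy {
    my ($src, $dst) = @_;
    copy($src, $dst) or die "copy '$src' -> '$dst' failed: $!";
    return 1;
}
```

The trade-off is that File::Copy gives you no progress callback, which is where the hand-rolled block loop above earns its keep.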



Re^2: How to decide the size of block in file transferring?
by SadEmperor (Novice) on Nov 18, 2008 at 08:04 UTC
    Yes, the script is running under Windows now, but it may be used under Linux in the future.
    I'll try CopyFileEx. Thanks a lot :)