I need to copy a 1GB file from the United States to a server in China.
This is hardly an issue involving any programming language, much less Perl. The issue you are concerned with has more to do with network topology and infrastructure. Going from the US to a server in China involves (last I was aware) your data crossing from the US to Japan (or some other Asia-Pacific gateway), passing through one or more gateways after that, and thence to China. I don't think there are any direct US --> China paths, though that may have changed in the last 10 years.
Every gateway hop adds latency to the transfer, and no programming language, no matter how clever, is going to help with that.
Peter L. Berghold -- Unix Professional
Peter -at- Berghold -dot- Net; AOL IM redcowdawg Yahoo IM: blue_cowdawg
Everything that blue_cowdawg says is accurate. There are really only two ways that you could make things appreciably faster:
- compress the file before the copy, or use a copy method that compresses as it copies
- split the file into pieces, do the transfer in parallel, and reassemble it afterwards. In some cases a server won't allow a single connection to exceed a pre-set transfer limit, but will allow multiple connections whose combined rate exceeds the single-connection maximum. As blue_cowdawg also points out, this method won't help if some intervening hop ends up being the bottleneck.
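The two techniques above can be sketched with standard Unix tools. This is a local simulation, a minimal sketch only: the file is scaled down from 1GB, the names are illustrative, and the actual parallel transfers (one scp/ftp session per piece) are left as a comment.

```shell
#!/bin/sh
# Sketch: compress, split into pieces for parallel transfer,
# then reassemble and verify. Local simulation; names illustrative.
set -e

# Stand-in for the 1GB file (1 MB here to keep the demo quick).
head -c 1048576 /dev/urandom > payload.bin

# 1. Compress before the copy.
gzip -c payload.bin > payload.bin.gz

# 2. Split into fixed-size pieces; each piece.* could then be
#    pushed over its own connection in parallel.
split -b 262144 payload.bin.gz piece.

# ... transfer each piece.* in parallel here ...

# 3. Reassemble on the far side and verify integrity.
cat piece.* > reassembled.gz
gunzip -c reassembled.gz > reassembled.bin
cmp payload.bin reassembled.bin && echo "OK: transfer verified"
```

Note that compressing helps only if the data actually compresses (random or already-compressed data, as in this demo, won't shrink); for real transfers, tools like `rsync -z` or `scp -C` compress on the wire for you.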
With no more info than 'a server in China', it's hard to be more specific. However, given the file size and the likely sub-optimal transfer rate, it would be wise to use something like Net::FTPSSL, whose 'put' command allows you to pass an offset of -1, telling it to attempt to resume where it left off.
fnord
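The resume-at-offset idea behind that -1 argument can be sketched with plain tools. This is a local simulation only: the "remote" file and the partial copy are both local files with illustrative names, and the byte-copy stands in for the network send.

```shell
#!/bin/sh
# Sketch of resume-from-offset: measure how much already arrived,
# then send only the remainder. Local simulation; names illustrative.
set -e

head -c 100000 /dev/urandom > source.bin    # the full file
head -c 40000  source.bin   > partial.bin   # an interrupted copy

# Offset = bytes already transferred.
offset=$(wc -c < partial.bin)

# Send only the remainder, appending from the offset onward.
dd if=source.bin bs=1 skip="$offset" >> partial.bin 2>/dev/null

cmp source.bin partial.bin && echo "resumed copy matches"
```

Over a real link the same idea is what `curl -C -` and `rsync --partial` do for you, so either is worth considering if FTPS isn't a hard requirement.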