in reply to Getting/handling big files w/ perl
Although the computer you are using has apparently beefy specs, a laptop-class machine does not have nearly the same data-throughput capability as a rack-style Macintosh server with a comparable CPU. This machine has an impressive CPU, but CPU really isn't what will most affect the completion time of a workload of this sort. The biggest impact, I expect, will come from the slowest component, the network, and from the manner in which that network is being used. (For instance, is the HTTP data stream gzip-compressed?) The performance characteristics of SSDs can also surprise you.
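A quick way to answer the gzip question is to ask the server directly. This is only a sketch, assuming LWP::UserAgent is installed; the URL is made up and stands in for your real feed:

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $url = 'http://example.com/bigfile';   # hypothetical URL for the feed
my $ua  = LWP::UserAgent->new;

# HEAD request, advertising that we can accept a gzip-compressed stream
my $resp = $ua->head( $url, 'Accept-Encoding' => 'gzip' );
die $resp->status_line unless $resp->is_success;

printf "Content-Encoding: %s\n", $resp->header('Content-Encoding') // 'none';
printf "Content-Length:   %s\n", $resp->header('Content-Length')   // 'unknown';
```

If the server reports `Content-Encoding: gzip`, LWP will decompress transparently when you ask for `$resp->decoded_content`; if it doesn't, getting the source to enable compression could shave a good chunk off the transfer time.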
Beyond that, I would look at bringing additional computers into the mix, if the local network can support it, and consider re-defining the problem itself, if that is possible. For example, if the 0.5GB download leads to the 49 files, could the source instead provide several pre-compressed files (say, five) that several computers could download simultaneously, decompress locally, and then move to the destination?
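As a rough illustration of the "several pre-compressed parts, fetched and unpacked in parallel" idea on a single box: the part URLs and worker count below are invented, and Parallel::ForkManager and IO::Uncompress::Gunzip are assumed to be installed.

```perl
use strict;
use warnings;
use LWP::Simple            qw(getstore);
use HTTP::Status           qw(is_success);
use Parallel::ForkManager;
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

# Hypothetical part URLs; in practice the source would publish these.
my @parts = map { "http://example.com/data/part$_.gz" } 1 .. 5;
my $pm    = Parallel::ForkManager->new(5);    # up to 5 concurrent workers

for my $url (@parts) {
    $pm->start and next;                      # parent loops on; child works

    ( my $gz  = $url ) =~ s{.*/}{};           # local name, e.g. part1.gz
    ( my $out = $gz  ) =~ s/\.gz\z//;         # decompressed name, e.g. part1

    my $rc = getstore( $url, $gz );
    die "fetch of $url failed ($rc)\n" unless is_success($rc);

    gunzip $gz => $out
        or die "gunzip of $gz failed: $GunzipError\n";

    $pm->finish;                              # child exits here
}
$pm->wait_all_children;
```

Spread across multiple machines, you would split the `@parts` list between boxes instead of (or as well as) between forked children; the shape of each worker stays the same.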
From your description, I doubt that the process can be improved much as it stands: it is I/O-bound, and the I/O capabilities of the machine are lackluster. It could realistically (and perhaps significantly) be improved by re-defining it and then, as others have suggested, "throwing silicon at" the re-defined process.
Replies are listed 'Best First'.

- Re^2: Getting/handling big files w/ perl by roboticus (Chancellor) on Nov 17, 2014 at 12:38 UTC
- Re^2: Getting/handling big files w/ perl by BrowserUk (Patriarch) on Nov 17, 2014 at 11:48 UTC