in reply to Getting/handling big files w/ perl

Batch processing, sequential tasks with dependencies? Sounds like an excellent candidate for make automation. Makefile recipes specify dependencies and the necessary build steps. Going parallel can be as easy as make -j8.
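A minimal sketch of what that might look like (file names, the URL and process.pl are placeholders, not from your post): each dump gets fetched, decompressed and processed independently, so make -j8 can keep several of those chains in flight while still honouring the per-file dependencies.

    # Sketch only -- FILES, the URL and process.pl are made up.
    # Recipe lines must be indented with a literal tab.
    FILES   := dump1 dump2 dump3
    TARGETS := $(FILES:%=%.done)

    all: $(TARGETS)

    %.gz:
            wget -q -O $@ http://example.org/dumps/$@

    %.txt: %.gz
            gzip -dc $< > $@

    %.done: %.txt
            perl process.pl $< && touch $@

Another nice side effect: if a step dies halfway through, re-running make only redoes the targets that are missing or out of date.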

Probably the foremost design concern is to think of your data as streams: how fast can you pull it over the net and write it to disk, and what is the (aggregate) throughput of your decompression? Dimension the pipes accordingly and assemble the stages to match.
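In Perl itself that usually means reading straight from a decompressor instead of unpacking to a temporary file first. A rough sketch (the file name is made up, and it assumes gzip is on the PATH):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $file = 'bigdump.gz';                    # placeholder name
    open(my $fh, '-|', 'gzip', '-dc', $file)    # decompress on the fly
        or die "can't start gzip for $file: $!";

    my $records = 0;
    while (my $line = <$fh>) {
        # handle one record at a time; nothing accumulates in memory
        $records++;
    }
    close($fh) or warn "gzip exited with status $?";
    print "$records records\n";

The same idea works on the download side: pipe wget -O - into the decompressor so the network, the CPU and the disk all stay busy at the same time.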

There are other tools besides wget: rsync is efficient, flexible, and can do on-the-fly compression. It might be applicable to your situation, but we're lacking the details.
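For what it's worth, a typical invocation (host and path obviously made up) would look something like

    rsync -az --partial --progress user@remote:/data/bigdump.gz ./

where -z compresses on the wire, -a preserves attributes, and --partial keeps an interrupted transfer around so a re-run picks up where it left off instead of starting a huge file from scratch.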