in reply to Re^6: Unpacking and converting
in thread Unpacking and converting

Yeah, I understood that. That's why I made the same suggestion (minus zipping). Zipping is a good idea, though. It would slow down pre-processing, but it might speed things up overall by reducing the number of packets to send.

Re^8: Unpacking and converting
by dwalin (Monk) on Feb 18, 2011 at 18:56 UTC

    Zipping is crucial, and it really does solve the problem. I have already tested it with Net::OpenSSH and it works beautifully with the default SSH compression. The gzip algorithm turned out to be fiendishly clever with this report-like data: it shrank a 26 MB sample report down to about 640 KB. That's what I call efficiency.
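    That kind of ratio is easy to reproduce on redundant, report-like text. A minimal sketch (Python used for brevity; the sample line and sizes are invented, not the actual report data):

```python
import gzip

# Build ~1 MB of repetitive, report-like text (hypothetical sample;
# a real report would be less uniform, but similarly redundant).
line = "iface0 rx_bytes=123456 tx_bytes=654321 errs=0 drops=0\n"
report = (line * 20000).encode("ascii")

compressed = gzip.compress(report)
ratio = len(compressed) / len(report)
print(f"original: {len(report)} bytes, "
      f"gzipped: {len(compressed)} bytes ({ratio:.1%})")
```

    On highly repetitive input like this, gzip typically gets well below the ~2.5% ratio reported above (640 KB / 26 MB).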

    The theoretical maximum rate at which the system could emit data is approximately 486 MB per second; compressed down to 5%, that would take only ~25 MB/s. A 1 Gbit/s link can reliably transfer about 100 MB per second, which leaves decent spare capacity. Processing all this stuff is a different question altogether, but that side I can control.
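    The headroom arithmetic above can be sanity-checked with the same numbers (the 5% ratio is conservative relative to the ~2.5% observed on the sample report):

```python
peak_mb_s = 486   # theoretical maximum report output, MB/s
ratio = 0.05      # assumed compression ratio (conservative)
link_mb_s = 100   # practical throughput of a 1 Gbit/s link, MB/s

compressed_mb_s = peak_mb_s * ratio
print(f"compressed stream: ~{compressed_mb_s:.0f} MB/s "
      f"of {link_mb_s} MB/s available "
      f"({compressed_mb_s / link_mb_s:.0%} of link capacity)")
```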

    Thanks for the input anyway!

    Regards,
    Alex.