Zipping is crucial and it really solves the problem. I have already tested it with Net::OpenSSH and it works beautifully with default SSH compression. The gzip algorithm turned out to be fiendishly clever with this report-like data: it shrank a 26 MB sample report down to about 640 KB. That's what I call efficiency.
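The effect is easy to reproduce. A minimal sketch, using Python's stdlib gzip on made-up repetitive report-style records (the record format below is hypothetical, not the actual report data), shows the same order of compression:

```python
import gzip

# Hypothetical stand-in for report output: fixed-format, highly
# repetitive records, which is exactly what gzip (and SSH's zlib
# compression) handles so well.
record = b"iface0 rx_bytes=123456789 tx_bytes=987654321 errs=0 drop=0\n"
report = record * 100_000  # roughly 5.9 MB of sample "report"

compressed = gzip.compress(report)
ratio = len(compressed) / len(report)
print(f"{len(report)} -> {len(compressed)} bytes ({ratio:.2%})")
```

Real report data won't be this uniform, but a few percent of the original size is plausible for anything this structured.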
The theoretical maximum the system could emit is approx 486 MB of data per second; compressed down to 5%, that is only ~25 MB/s. A 1 Gbit/s link can reliably transfer about 100 MB per second, which leaves decent spare capacity. Processing all this stuff is a completely different question altogether, but that side I can control.
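A quick back-of-the-envelope check of those numbers (the 80% usable-link figure is my assumption for protocol overhead, not from the original):

```python
# Worst-case raw report output, in MB/s, from the text.
peak_mb_per_s = 486
# Compression down to ~5% of original size.
ratio = 0.05
compressed_mb_per_s = peak_mb_per_s * ratio  # worst-case compressed stream

# 1 Gbit/s link; assume ~80% is usable after protocol overhead
# (assumption), giving the ~100 MB/s figure from the text.
link_mb_per_s = 1000 / 8 * 0.8

headroom = link_mb_per_s - compressed_mb_per_s
print(compressed_mb_per_s, link_mb_per_s, headroom)
```

So even at the theoretical peak, the compressed stream uses only about a quarter of the link.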
Thanks for the input anyway!
Regards,
Alex.
In reply to Re^8: Unpacking and converting
by dwalin
in thread Unpacking and converting
by dwalin