Your bottleneck may be the network connection. Compress the text file before sending it and uncompress it after receiving. You don't need any Perl for that; a shell command is sufficient:
>gzip -c < /etc/passwd | ssh remotehost 'gzip -dc > /tmp/passwd'
>ssh remotehost head /tmp/passwd
root:x:0:0::/root:/bin/bash
bin:x:1:1:bin:/bin:/bin/false
daemon:x:2:2:daemon:/sbin:/bin/false
adm:x:3:4:adm:/var/log:/bin/false
lp:x:4:7:lp:/var/spool/lpd:/bin/false
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/:/bin/false
news:x:9:13:news:/usr/lib/news:/bin/false
>
gzip is just one option. bzip2 and xz usually compress a little better, but they need more CPU power and/or RAM than gzip and are not available everywhere. Both are drop-in replacements for gzip and support at least gzip's classic command line arguments.
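For example, the pipeline from above works unchanged with xz (or bzip2) swapped in; this is just a sketch using the same remotehost and file as before:

>xz -c < /etc/passwd | ssh remotehost 'xz -dc > /tmp/passwd'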
Note that text files usually compress very well, unlike already compressed files. Trying to compress an already compressed file, like a ZIP or JPEG file, just wastes CPU cycles.
Another option is to use rsync IF you transfer the same file over and over again, after editing or appending to it. rsync also trades CPU power for bandwidth and tries quite hard not to transfer file parts that are already present on the remote host.
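A minimal sketch, again assuming the same file and host as above; -z compresses data on the wire, -P shows progress and lets an interrupted transfer resume:

>rsync -z -P /etc/passwd remotehost:/tmp/passwd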
A third option for editing and re-transmitting the same file is to use diff on the local host and patch on the remote host. Patches are usually much smaller than the original file.
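Roughly like this, assuming you kept a copy of the previously transferred version under the hypothetical name passwd.orig; patch reads the unified diff from stdin and applies it to the file given on its command line:

>diff -u passwd.orig /etc/passwd | ssh remotehost 'patch /tmp/passwd'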
And, of course, if you suspect that Perl modules reimplementing standard Unix utilities are slow, try the original utility. The originals are usually written in C or C++ and compiled with a highly optimizing compiler, so they have less startup time and less overhead than Perl. In other words, try ssh, sftp, ftp.
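As one sketch of that, sftp can be driven non-interactively by feeding it a batch of commands on stdin (-b - reads the batch from standard input); paths and host are the same assumptions as above:

>echo 'put /etc/passwd /tmp/passwd' | sftp -b - remotehost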
Alexander
--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)