in reply to Forking server for ssh tunnels

Well, your question is sort of a "looking for ideas" one. If your ssh tunnels are not passing huge amounts of data, you could try running a collection of them with IO::Select. IO::Select will jump from filehandle to filehandle and handle each one as needed. But SSH is quite complicated, and I don't know whether there would be glitches in running separate SSH instances under one interpreter. Forking them is definitely safer. Threads would be another possibility, but they use a lot of RAM too. Why not just spend $100 for another gig of RAM? :-) It will probably be cheaper than all the time spent writing and testing reliable scripts.
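
A rough sketch of the IO::Select idea, just to show the shape of it. The hosts and the remote command are invented, and each "tunnel" is faked as a read pipe from an ssh command; real tunnels would need more care:

    #!/usr/bin/perl
    # Rough sketch only: hosts and remote command are hypothetical.
    use strict;
    use warnings;
    use IO::Select;

    my @hosts = qw(host1 host2);          # hypothetical remote hosts
    my (%host_for, @handles);

    for my $host (@hosts) {
        # read pipe from an ssh child; list-form piped open forks for us
        open my $fh, '-|', 'ssh', $host, 'some-remote-command'
            or die "can't start ssh to $host: $!";
        $host_for{fileno $fh} = $host;
        push @handles, $fh;
    }

    my $sel = IO::Select->new(@handles);

    while (my @ready = $sel->can_read) {  # block until some handle has data
        for my $fh (@ready) {
            my $host = $host_for{fileno $fh};
            my $n = sysread $fh, my $buf, 4096;   # unbuffered read plays nicely with select
            if ($n) {
                print "[$host] $buf";
            }
            else {                        # EOF: that ssh exited, stop watching it
                $sel->remove($fh);
                close $fh;
            }
        }
        last unless $sel->count;          # all tunnels gone
    }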

I'm not really a human, but I play one on earth. flash japh

Re^2: Forking server for ssh tunnels
by salva (Canon) on Feb 26, 2006 at 08:52 UTC
    Net::SSH::Perl, the Perl implementation of SSH, has some support for non-blocking operation, but not enough to run it inside a select loop.

    I think the best option would be to create the tunnels with IPC::Open2 and use a single Perl process, written around a select loop, to control them and listen for new connections.
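
    A very rough, untested sketch of that single-process idea: open2 starts each ssh child, and one select loop watches the ssh output handles plus a listening socket for new connections. The port, host names, remote command and per-connection plumbing are all invented here.

        use strict;
        use warnings;
        use IPC::Open2;
        use IO::Select;
        use IO::Socket::INET;

        my $listener = IO::Socket::INET->new(
            LocalPort => 2222,              # hypothetical control port
            Listen    => 5,
            ReuseAddr => 1,
        ) or die "listen failed: $!";

        my %tunnel;                         # fileno of ssh output -> { pid, to_ssh }
        my $sel = IO::Select->new($listener);

        sub start_tunnel {
            my ($host) = @_;
            my ($from_ssh, $to_ssh);
            my $pid = open2($from_ssh, $to_ssh, 'ssh', $host, 'some-remote-command');
            $sel->add($from_ssh);
            $tunnel{fileno $from_ssh} = { pid => $pid, to_ssh => $to_ssh };
        }

        start_tunnel('host1');              # hypothetical initial tunnel

        while (my @ready = $sel->can_read) {
            for my $fh (@ready) {
                if ($fh == $listener) {
                    my $client = $listener->accept;
                    # ... read the request and decide which tunnel to start ...
                    # start_tunnel('host2');
                    close $client;
                }
                else {
                    my $n = sysread $fh, my $buf, 4096;
                    if ($n) {
                        # ... hand $buf to whatever is using this tunnel ...
                    }
                    else {                  # EOF: this ssh exited
                        my $t = delete $tunnel{fileno $fh};
                        $sel->remove($fh);
                        close $fh;
                        close $t->{to_ssh};
                        waitpid $t->{pid}, 0;   # reap the child
                    }
                }
            }
        }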

    ... though it's not clear to me what kind of "ssh tunnels" the OP means: using the stdin and stdout of the ssh process to tunnel data (as in tar cf - . | ssh foo tar xf -), or using ssh's native support for tunnels (e.g. ssh foo -L1234:host:1234).

      Yes, I think what you're suggesting sounds like what we should have been doing in the first place. I will look along those lines.

      As to the hows, we're doing:

      # the "-|" open forks: the child execs ssh, the parent gets a read handle in $f
      my $pid = open my $f, "-|" or exec @cmd; # @cmd holds the path to the ssh binary and the arguments that set up the ongoing tunnel.

      Still not sure if I'm giving enough information, but you guys have helped put me on a better track, I think.