Well, your question is sort of a "looking for ideas" one. If your ssh tunnels aren't passing huge amounts of data, you could try running a collection of them under IO::Select, which jumps from filehandle to filehandle and services whichever one has data ready. But SSH is fairly complicated, and I don't know whether running several SSH instances under one interpreter might hit some glitch. Forking them is definitely safer.
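Here's a minimal sketch of the IO::Select pattern. To keep it self-contained, plain `echo` children stand in for your ssh tunnel processes (the commands are placeholders; swap in your real ssh invocations):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Select;

# Placeholder commands standing in for ssh tunnels.
my @cmds = ( "echo tunnel-one", "echo tunnel-two", "echo tunnel-three" );

my $sel = IO::Select->new;
for my $cmd (@cmds) {
    open( my $fh, '-|', $cmd ) or die "can't fork '$cmd': $!";
    $sel->add($fh);
}

# Loop until every child's filehandle has hit EOF.
my @got;
while ( $sel->count ) {
    for my $fh ( $sel->can_read ) {    # blocks until at least one is ready
        my $line = <$fh>;
        if ( defined $line ) {
            push @got, $line;
            print "got: $line";
        }
        else {                          # EOF: stop watching, clean up
            $sel->remove($fh);
            close $fh;
        }
    }
}
```

The key point is that one process services all the handles: `can_read` hands you only the filehandles that are actually readable, so you never block on a quiet tunnel while another one has data waiting.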
Threads would be another possibility, but they use a lot of RAM too.
Why not just spend $100 for another gig of RAM? :-) It will probably be cheaper than all the time spent writing and testing reliable scripts.
I'm not really a human, but I play one on earth.
flash japh