That is the solution we've used at work in the past: ssh/scp
with passwordless access using public keys in authorized_keys.
The part we don't like about this arrangement is that setting
up and tearing down the ssh pipes takes longer
than we'd prefer, especially when doing a lot of little
operations in succession.
Ideally, we'd like to implement an RPC library which
builds (and re-builds if it is broken) an ssh pipe, and then
uses it persistently over multiple RPC requests.
Of course, most straightforward ways of doing this
bypass the security provided by ssh.
We looked into the perl RPC modules available about a year
ago or more, and the best we found was a module that used
shared secrets instead of public key encryption, which we
didn't like (and couldn't get to work). I'd think this
was a pretty common problem; I'm
surprised someone hasn't already figured out a more clever
solution than I can come up with, and put it into a module...
Alan
Try controlling a remote shell through the Expect module.
At the low level, we were going to do something a bit more
like this (in perlish pseudocode):
use IPC::Open2;

# Bidirectional pipe to the RPC server on the remote host (the remote
# command is passed to ssh as an argument, not via -c).
my $pid = open2(my $from_server, my $to_server,
    'ssh', 'myrpcserver.mydomain.com', '/path/to/my_rpc_server_code');
print {$to_server} "my RPC commands\n";
my $RPC_results = <$from_server>;
When controlling remote access with authorized_keys, you
can restrict a key to Only One Particular Command (a
command="..." prefix on the key's line), to increase security.
The problem is, you can't send that command different command-line
arguments each time. We'd like to Not give access to a
completely functioning shell, and we don't want to create a
new key for each different task we might run. So instead,
we build
the RPC server so it gets serialized subroutine calls over
STDIN. The server would throw out requests it doesn't know
how to do, or doesn't want to let you do, and then execute the
rest, and send back serialized results.
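Roughly, that server loop might look something like the sketch below.
It is only a sketch: it assumes a line-oriented JSON protocol (the core
JSON::PP module), and the delete_account sub and the whitelist layout
are made up for illustration. The comment at the top shows how the key
could be pinned to this one script in authorized_keys.

# In ~/.ssh/authorized_keys on the remote box, the key can be tied to
# this script with a forced command, e.g.:
#   command="/path/to/my_rpc_server_code" ssh-rsa AAAA... rpc@local
use strict;
use warnings;
use JSON::PP;

$| = 1;   # flush each reply back down the ssh pipe immediately

# Whitelist of subroutines the server is willing to run.
my %allowed = (
    delete_account => \&delete_account,   # hypothetical task
);

my $json = JSON::PP->new;
while (my $line = <STDIN>) {
    my $req = eval { $json->decode($line) };
    my $reply;
    if ($req and $req->{sub} and $allowed{ $req->{sub} }) {
        my $result = eval { $allowed{ $req->{sub} }->( $req->{args} ) };
        $reply = $@ ? { ok => 0, error => "$@" }
                    : { ok => 1, result => $result };
    }
    else {
        # Anything not in the whitelist is refused, never executed.
        $reply = { ok => 0, error => 'request refused' };
    }
    print $json->encode($reply), "\n";
}

sub delete_account {
    my ($args) = @_;
    # ... the real administrative work would go here ...
    return "deleted $args->{account} on $args->{server}";
}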
This way we wouldn't need to deal with a
human-oriented interface through Expect.pm, and we could
code the whole remote library of routines in Perl without
breaking it into a bunch of smaller scripts to be called
from a shell.
Above that low-level RPC stuff, we'd have a layer of abstraction
which would turn the end user interface into something more
straightforward, like:
use My_Remote_Library;   # This hides the RPC stuff in there somewhere

My_Remote_Library::Do_Some_Administrative_Task_With(
    { account => 'bob', server => 'www', action => 'delete_account' }
);

# This executes Do_Some_Administrative_Task_With on the remote server,
# over the secure ssh pipe, and gives us back the results in some
# useful manner. I put no thought into the calling interface;
# it would probably make much more sense in real life, if it existed.
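Under the hood, My_Remote_Library might do something along these lines
(again only a sketch, assuming the same JSON-over-ssh protocol as the
server sketch above; the _rpc_call helper and the reconnect logic are
invented for illustration):

package My_Remote_Library;
use strict;
use warnings;
use IPC::Open2;
use IO::Handle;
use JSON::PP;

my ($from_server, $to_server, $pid);
my $json = JSON::PP->new;

# (Re)open the ssh pipe if it isn't up, so it can persist across calls.
sub _connect {
    return if $pid and kill 0, $pid;   # very naive liveness check
    $pid = open2($from_server, $to_server,
        'ssh', 'myrpcserver.mydomain.com', '/path/to/my_rpc_server_code');
    $to_server->autoflush(1);
}

# Serialize one subroutine call, ship it over the pipe, return the reply.
sub _rpc_call {
    my ($sub, $args) = @_;
    _connect();
    print {$to_server} $json->encode({ sub => $sub, args => $args }), "\n";
    my $line = <$from_server>;
    return $json->decode($line);
}

sub Do_Some_Administrative_Task_With {
    my ($args) = @_;
    return _rpc_call($args->{action}, $args);
}

1;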
Now the problem is... if we want to use a persistent pipe
across multiple uses of the library in various scripts, then
we either need to make it persist inside mod_perl (which
will work for our web interfaces but not for command-line
remote administrative tools and cron scripts),
or build a daemon on the local machine to handle remote RPC
calls over a persistent ssh pipe. If we were to do that,
then we've just added another layer of calling indirection (slow),
and moved the security concern to the local
machine instead of the remote one. Not really a solution at
all; it would probably be better to live without persistent
pipes.
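For what it's worth, that local daemon wouldn't have to be big: something
like the sketch below could sit on a Unix-domain socket and funnel one
request line per connection through a single persistent ssh pipe. The
socket path is made up, and this ignores the local-security question the
paragraph above raises.

use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(SOCK_STREAM);
use IPC::Open2;
use IO::Handle;

# The one persistent ssh pipe, shared by every local caller.
my $pid = open2(my $from_server, my $to_server,
    'ssh', 'myrpcserver.mydomain.com', '/path/to/my_rpc_server_code');
$to_server->autoflush(1);

my $sock_path = '/var/run/rpc_proxy.sock';   # made-up path
unlink $sock_path;
my $listener = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $sock_path,
    Listen => 5,
) or die "cannot listen on $sock_path: $!";

# One request line per connection: forward it over ssh, hand back the reply.
while (my $client = $listener->accept) {
    my $request = <$client>;
    if (defined $request) {
        print {$to_server} $request;
        my $reply = <$from_server>;
        print {$client} $reply;
    }
    close $client;
}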
Anyway, the Real Problem with any of this is Too Many Fires
to fight, which leaves no adequate resources for this kind of
forward progress. The RPC library is only one small part of a
much larger set of tools that's needed...
Alan