tgdaero has asked for the wisdom of the Perl Monks concerning the following question:

Error: bad password or master process exited unexpectedly

Apache/2.2.17 using suexec; perl, v5.10.0 built for sun4-solaris; OpenSSH_5.3p1 OpenPKG-CURRENT, OpenSSL 0.9.8k 25 Mar 2009;
Apache runs as user AP, while my CGI runs via suexec as user U2.

My module contains the following method, which is called from the main CGI script. The same code works when run directly from a .pl script on the command line.

    # Create / Remove ZFS-Directories
    sub changeZFS {
        my $type   = shift;
        my $UID    = shift;
        my $IDname = shift;
        my $ou     = shift;
        my $msg;
        my ($stdout, $stderr, $exit) = undef;

        my $ssh = Net::OpenSSH->new("$ZFS_HOST", ctl_dir => $SSH_CTLDIR,
                                    user => $SSH_USER, passwd => $SSH_PWD);
        $ssh->error
            and return "Cannot Create Homedirectory $ZFS_HOST: " . $ssh->error();

        print LOG "changeZFS(): $ZFS_HOST: $ZFS_CMD $type $UID $IDname $ou\n";
        ($stdout, $stderr, $exit) = $ssh->capture({timeout => 20},
                                                  "$ZFS_CMD $type $UID $IDname $ou:");
        $ssh->error
            and print LOG "Command $ZFS_CMD failed: " . $ssh->error;

        print LOG "OUT=$stdout, ERR=$stderr, EXIT=$exit\n";
        print LOG "changeZFS() msg:$stdout\n";
        return $stdout;
    }

I assume Net::OpenSSH runs as user AP, since I needed to move the ctl_dir away from AP's HOME. That user is very restricted and has no ~/.ssh directory.
Running from the CLI as AP works as well, but fails when adding the host key to known_hosts.

Thanks for any recommendations

Replies are listed 'Best First'.
Re: Net::OpenSSH - connection from out a CGI script fails
by salva (Canon) on Jan 20, 2012 at 10:10 UTC
    Probably some permissions issue. Run ssh in verbose mode:
    $ssh = Net::OpenSSH->new($ZFS_HOST, ctl_dir => $SSH_CTLDIR,
                             user => $SSH_USER, passwd => $SSH_PWD,
                             master_opts => '-vvv',
                             master_stderr_fh => \*LOG);

    If that doesn't give you enough information about the cause of the problem, then, you can use truss to see what's happening at the OS level.

    update: and BTW, you are not using the capture method correctly. Do it as follows:

    my ($stdout, $stderr) = $ssh->capture2({timeout => 20},
                                           "$ZFS_CMD $type $UID $IDname $ou:");
    $ssh->error and print LOG "Command $ZFS_CMD failed: " . $ssh->error;
    print LOG "OUT=$stdout, ERR=$stderr, EXIT=$?\n";
    print LOG "changeZFS() msg:$stdout\n";
    return $stdout;
Re: Net::OpenSSH - connection from out a CGI script fails
by pklausner (Scribe) on Jan 20, 2012 at 10:40 UTC
    Doesn't suexec imply the CGI runs as user U2? Then you should create a writeable HOME_of_U2/.ssh and not disable detection of changed host keys. And rather than saving USER and PASS in your script (or whatever random file it reads): why not exchange keys between U2 and ZFS_HOST once? It would work like this:
    su - U2
    ssh-keygen -t dsa                 # creates .ssh/...
    cat .ssh/id_dsa.pub | ssh ZFS_HOST 'cat >> .ssh/authorized_keys'
                                      # accept remote key; enter pw once
    (Although either way it sounds scary to muck with remote filesystems from a web UI...)
      There are two things you need:
      1. .ssh/known_hosts in the Apache home (and the .ssh directory writeable by Apache)
      2. A .libnet-openssh-perl directory in the Apache home (also writeable by Apache)

      For example, my Apache user is "apache" with a home directory of /var/www
      I have /var/www/.ssh owned by apache, and /var/www/.libnet-openssh-perl owned by apache
      I ssh to devices using my own account, then copy my known_hosts file to /var/www/.ssh/known_hosts
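      The setup above can be sketched as a short root shell session. The user name "apache" and the home directory /var/www come from the example above; adjust both to your installation:

```shell
# Assumptions (from the post above): Apache user "apache", home /var/www.
# Run as root; adjust names and paths for your system.
mkdir -p /var/www/.ssh /var/www/.libnet-openssh-perl
chown apache /var/www/.ssh /var/www/.libnet-openssh-perl
chmod 700 /var/www/.ssh /var/www/.libnet-openssh-perl

# Seed known_hosts from an account that has already accepted the host key:
cp ~/.ssh/known_hosts /var/www/.ssh/known_hosts
chown apache /var/www/.ssh/known_hosts
```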
Re: Net::OpenSSH - connection from out a CGI script fails
by jethro (Monsignor) on Jan 20, 2012 at 10:11 UTC

    Let the script create a file in /tmp; then you know which user the script runs as.

    If the problem is the .ssh dir, would it be possible to create .ssh/known_hosts as a link to /dev/null? That would keep ssh happy writing to it (hopefully) and reading would always show an empty file

    To simulate the problem, make a write-protected, empty .ssh in your home dir and execute the script. If you get the same error message, you have found the problem.

    If not: in many cases like this, differing environment variables are the culprit.
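    The two checks above (which user the CGI really runs as, and which environment it sees) can be combined in a small snippet run from the CGI, e.g. via Perl's system(). The log path /tmp/cgi-identity.log is just an illustrative name:

```shell
# Record the identity and environment of the running CGI.
# /tmp/cgi-identity.log is an arbitrary, illustrative path.
{
    id                          # uid/gid the script actually runs as
    echo "HOME=${HOME:-unset}"  # HOME decides where ssh looks for .ssh/
    env | sort                  # differing env vars are a common culprit
} > /tmp/cgi-identity.log 2>&1
```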

      If the problem is the .ssh dir, would it be possible to create .ssh/known_hosts as a link to /dev/null? That would keep ssh happy writing to it (hopefully) and reading would always show an empty file

      That opens the door for man-in-the-middle attacks!

        Sure, but it is no worse than the current situation, where there is no .ssh directory and no writing is allowed at all.