Ssh and qx

by cbeckley (Curate)
on Mar 30, 2017 at 18:04 UTC

It actually took a little digging to find out how to properly handle the output and the various return codes of qx when executing an ssh command.

Why would you actually want to do such a thing to begin with? You wouldn't. Don't do it. Stop! For the love of ...

If, however, you have a machine whose operating system hasn't had a vendor-supported upgrade any time this century, you may not have a choice.

I feel your pain. It runs deep, share it with me.

sub ops_do_ssh_qx {
   my ($cmd) = @_;

   $cmd->{ssh_cmd_qx} =
      'ssh ' . $cmd->{user} . '\@' . $cmd->{host} .
      ' \'' . $cmd->{command} . '\'' .
      ' 2>/dev/null';

   $cmd->{output} = qx($cmd->{ssh_cmd_qx});

   if ( defined $cmd->{output} ) {
      $cmd->{cmd_ret_code} = $?;
      chomp $cmd->{output};
      if ( $cmd->{cmd_ret_code} ) {
         $cmd->{success} = FAILURE;
      }
   } else {
      ($cmd->{ssh_ret_code}, $cmd->{ssh_ret_msg}) = (0 + $!, '' . $!);
      $cmd->{success} = FAILURE;
   }

   return $cmd;
}

The hash you pass in looks like this:

my $cmd = {
   name    => 'foo',
   user    => 'foo_user',
   host    => 'foo.bar.com',
   command => 'do_something_useful_here',
   success => SUCCESS
};

And you invoke it thusly:

my $cmd_status = ops_do_ssh_qx($cmd);

if ( $cmd_status->{success} ) {
   do_something_with $cmd_status->{output};
} else {
   do_something_with $cmd_status->{cmd_ret_code},
                     $cmd_status->{ssh_ret_code},
                     $cmd_status->{ssh_ret_msg};
}

Unfortunately the values you end up with in

$cmd_status->{cmd_ret_code}
$cmd_status->{ssh_ret_code}
$cmd_status->{ssh_ret_msg}

are, for both the OS and SSH, implementation-dependent, which is just one of the reasons you shouldn't be doing this if you have a choice.

If anybody finds this useful, you have my condolences.

Thanks,
cbeckley

Update: haukex has a great write-up regarding alternatives to qx/backticks here: Calling External Commands More Safely. My Perl was too old for the ones I tried, but afoken has indicated that piped opens are available even in 5.004.

Replies are listed 'Best First'.
Re: Ssh and qx
by afoken (Chancellor) on Mar 31, 2017 at 05:52 UTC
    sub ops_do_ssh_qx {
       my ($cmd) = @_;

       $cmd->{ssh_cmd_qx} =
          'ssh ' . $cmd->{user} . '\@' . $cmd->{host} .
          ' \'' . $cmd->{command} . '\'' .
          ' 2>/dev/null';

       $cmd->{output} = qx($cmd->{ssh_cmd_qx});

       if ( defined $cmd->{output} ) {
          $cmd->{cmd_ret_code} = $?;
          chomp $cmd->{output};
          if ( $cmd->{cmd_ret_code} ) {
             $cmd->{success} = FAILURE;
          }
       } else {
          ($cmd->{ssh_ret_code}, $cmd->{ssh_ret_msg}) = (0 + $!, '' . $!);
          $cmd->{success} = FAILURE;
       }

       return $cmd;
    }

    Well, I can't imagine an OS that has ssh, but no fork and exec (except for Windows). And if you have fork and exec, you can do better than using qx, see "Safe pipe opens" in perlipc.

    Why no qx?

    qx (and the equivalent ``) uses some heuristics to find out if the default shell needs to be invoked. For simple cases, like qx(foo bar baz), the shell is not needed. But when non-alphanumeric characters come into play, the entire string is passed to the default shell. And at that point, you can not win the game. The default shell is /bin/sh, and that's all you know. It may be a symlink to /bin/bash, which currently has four major versions with different behavior. It may be ash or debian's fork, dash. It may be some csh, or something completely different. And of course, all of those possible shells have different parsing and quoting rules. Start at https://www.in-ulm.de/~mascheck/various/ if you want to get an impression of what traps you may find. And don't make me start complaining about command.com and cmd.exe.
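
    A minimal sketch of that heuristic (the commands and file names are purely illustrative):

        my @simple = qx(ls -l);                     # no metacharacters: perl can exec ls directly
        my @fancy  = qx(ls -l *.txt 2>/dev/null);   # '*' and '>' make perl hand the whole string to /bin/sh -c

    Which /bin/sh ends up parsing the second string is exactly the part you cannot control.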

    So, you definitely want to avoid the shell. "Safe pipe opens" avoids the shell by avoiding the biggest problem - parsing a string into a program name and its arguments. The "safe pipe opens" recipe uses the list form of exec, completely bypassing all parsing. Of course, that requires that you pass a list to the function, not a string.
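
    Adapted to the ssh case from this thread, the perlipc recipe looks roughly like this (a sketch only; user, host and command are the placeholder values from the original post):

        use English qw(-no_match_vars);

        my ($user, $host, $remote_cmd) = ('foo_user', 'foo.bar.com', 'do_something_useful_here');

        my $pid = open(KID_TO_READ, "-|");           # implicit fork; bareword handle works on ancient perls
        defined $pid or die "can't fork: $!";

        if ($pid) {                                  # parent: read the child's STDOUT
            my @output = <KID_TO_READ>;
            close(KID_TO_READ) or warn "kid exited $?";
        } else {                                     # child
            ($EUID, $EGID) = ($UID, $GID);           # drop setuid/setgid privileges, if any
            exec 'ssh', "$user\@$host", $remote_cmd  # list form of exec: no shell, no quoting games
                or die "can't exec ssh: $!";
        }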

    Another problem is the exit status guessing. For openssh, the exit status is defined in one simple sentence:

    ssh exits with the exit status of the remote command or with 255 if an error occurred.

    (Source: http://man.openbsd.org/ssh, linked from https://www.openssh.com/manual.html)

    So, if you see an exit code of less than 255, ssh has returned the exit code of the remote program, any number between 0 and 254. If you see an exit code of 255, ssh may have run into trouble. Or it simply has returned the exit code of the remote command, which may also have been 255. That's not what your code does. Your code expects the remote program to exit with code 0. Several programs don't, like cmp, diff, and grep, to name just three.

    Also, there is more than that. The exit status ($?) is more than the exit code. It also has a flag that indicates a core dump, and it returns the number of the signal - if any - that killed the process.
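
    Untangling $? therefore looks something like this (a sketch; as described above, the 255 case stays ambiguous):

        my $status = $?;
        if ( $status & 127 ) {                            # child was killed by a signal
            my $sig  = $status & 127;
            my $core = ( $status & 128 ) ? ' (core dumped)' : '';
            warn "ssh died with signal $sig$core\n";
        } else {
            my $exit = $status >> 8;                      # exit code of ssh itself
            if ( $exit == 255 ) {
                warn "ssh error - or the remote command really did exit with 255\n";
            } elsif ( $exit ) {
                warn "remote command exited with $exit\n";  # cmp, diff, grep report "difference found" this way
            }
        }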

    $! is something completely different: it is errno from the C library. It does not contain any information about ssh problems; it just tells you what went wrong when attempting to start ssh. So the names ssh_ret_code and ssh_ret_msg are at best misleading. Note that $!, like errno, is NOT reset when a libc function succeeds. Quite the opposite is true: errno, and thus also $!, still contains garbage (old errors) after a successful call.
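
    In other words, only consult $! right after the call that failed, and use $? when the command itself ran (a sketch, reusing the ssh_cmd_qx string built above):

        my $output = qx($cmd->{ssh_cmd_qx});
        if ( !defined $output ) {
            warn "could not start ssh: $!\n";                  # starting the command failed; $! is meaningful here
        } elsif ( $? != 0 ) {
            warn "command failed, status ", $? >> 8, "\n";     # $! may hold a stale, unrelated error here
        }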

    Last, using qx in scalar context forces the user of your function to split the returned text into lines, wasting memory. qx supports list context, but your function doesn't.
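
    For comparison, qx itself gives the caller the choice (the remote command here is just an example):

        my $as_one_string = qx(ssh foo_user\@foo.bar.com 'uptime');   # scalar context: one big string
        my @one_per_line  = qx(ssh foo_user\@foo.bar.com 'uptime');   # list context: one element per line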


    So, some final words?

    Why would you actually want to do such a thing to begin with? You wouldn't. Don't do it. Stop! For the love of ...

    If, however, you have a machine whose operating system hasn't had a vendor-supported upgrade any time this century, you may not have a choice.

    [...]

    If anybody finds this useful, you have my condolences.

    Why did you post that? "Safe pipe opens" works better, and it did so even in the previous century. Don't post bad code!

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

      I admit I never tested piped opens; I was told by somebody who knows Perl far better than I do that they were not available in 5.004, and I didn't question it. Thank you for pointing that out.

      Regarding my use of $!, I only access it when the output is undefined, which I thought happened when something goes wrong with starting ssh. Is that not the case?

      Thank you for your corrections.

      Thanks,
      cbeckley

        Regarding my use of $!, I only access it when the output is undefined, which I thought happened when something goes wrong with starting ssh. Is that not the case?

        Yes, you are right, sorry. I initially misread your code (my brain just ignored the defined), but then I corrected that. Or so I thought; I forgot the $! part. The naming is still problematic, because it's not an ssh problem, it's a problem starting ssh.

        I admit I never tested piped opens

        Oh yes you did, implicitly in every `` and in every qx. "Unsafe" pipe opens are the more verbose variant of qx. I think you could replace qx with a function similar to this one:

        sub inefficient_qx {
            my $cmd = shift;
            local *PIPE;
            open PIPE, "$cmd |" or die "Could not open pipe from $cmd";
            # ^-- intentionally written using an old-style bareword handle for ancient perls
            if (wantarray) {
                my @tmp = <PIPE>;
                close PIPE or die "Close pipe failed: $!";
                return @tmp;
            } else {
                my $tmp = do { local $/; <PIPE> };   # $/ is undef inside the do block, so this slurps all output
                close PIPE or die "Close pipe failed: $!";
                return $tmp;
            }
        }

        The unsafe parts:

        • relying on the shell (or, to be more precise, one of a thousand different shells randomly installed as /bin/sh) to parse $cmd
        • not dropping permissions when running as root

        "Safe pipe opens" drop privileges, that's the ($EUID, $EGID) = ($UID, $GID) part. And, "safe pipe opens" use the list form of exec, avoiding shell issues.

        This was the only way (except for manually messing with pipes) to do it until perl 5.8.0 arrived. Perl 5.8.0 extends the three-argument form of open (open $handle, $mode, $filename) to accept a list of a command and its arguments in place of $filename when $mode is either "-|" or "|-" (open $handle, $mode, @list). This way, you could get a moderately safe variant of qx, but unfortunately, if @list has exactly one element, perl starts guesswork with that one element:

        > perl -E 'open my $pipe,"-|","pstree --ascii --arguments --long $$ 1>&2" or die $!;'
        perl -E open my $pipe,"-|","pstree --ascii --arguments --long $$ 1>&2" or die $!;
          `-sh -c pstree --ascii --arguments --long 22176 1>&2
              `-pstree --ascii --arguments --long 22176
        >

        If the one-element list were not treated specially, perl would complain that it could not find an executable with that funny name. This can be seen by incrementing the list size to 2:

        > perl -E 'open my $pipe,"-|","pstree --ascii --arguments --long $$ 1>&2","dummy" or die $!;'
        No such file or directory at -e line 1.
        >

        So, despite the elegant call, three-argument pipe open still messes with the shell, and you should really use the "safe pipe opens" code from perlipc. If your script runs with elevated privileges (e.g. running as root or setuid/setgid), there is no other way to drop privileges.
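
        For completeness, with more than one element in the list the 5.8.0 form really does bypass the shell; a sketch with the placeholder values from the original post (it still cannot drop privileges, though):

            my ($user, $host, $remote_cmd) = ('foo_user', 'foo.bar.com', 'do_something_useful_here');
            open my $ssh, '-|', 'ssh', "$user\@$host", $remote_cmd   # more than one list element: no shell
                or die "can't start ssh: $!";
            my @output = <$ssh>;
            close $ssh or warn "ssh exited with status $?";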

        Update:

        The special treatment of a single-element list is, of course, not specific to more-than-two-arguments pipe open. system and exec show the same behaviour:

        > perl -E 'system("pstree --ascii --arguments --long $$ 1>&2")==0 or die $!'
        perl -E system("pstree --ascii --arguments --long $$ 1>&2")==0 or die $!
          `-sh -c pstree --ascii --arguments --long 20620 1>&2
              `-pstree --ascii --arguments --long 20620
        > perl -E 'system("pstree --ascii --arguments --long $$ 1>&2","dummy")==0 or die $!'
        No such file or directory at -e line 1.
        > perl -E 'exec("pstree --ascii --arguments --long $$ 1>&2") or die $!'
        sh -c pstree --ascii --arguments --long 20657 1>&2
          `-pstree --ascii --arguments --long 20657
        > perl -E 'exec("pstree --ascii --arguments --long $$ 1>&2","dummy") or die $!'
        No such file or directory at -e line 1.
        >

        But unlike open, system and exec have a workaround. Specify the real executable as indirect object and the shell magic is gone:

        > perl -E '@list=("pstree --ascii --arguments --long $$ 1>&2"); system { $list[0] } @list and die $!'
        No such file or directory at -e line 1.
        > perl -E '@list=("pstree --ascii --arguments --long $$ 1>&2"); exec { $list[0] } @list or die $!'
        No such file or directory at -e line 1.
        >

        This is documented in exec and shorter also in system. Pipe open lacks this functionality / workaround, and you have to resort to the quite long code from "Safe pipe open" in perlipc.
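
        Applied to the ssh case from this thread, the workaround would look like this (a sketch; user, host and command are the placeholders from the original post):

            my @cmd = ('ssh', 'foo_user@foo.bar.com', 'do_something_useful_here');
            system { $cmd[0] } @cmd;                     # indirect object form: the list is never handed to a shell
            die "ssh failed with status $?" if $? != 0;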

        Alexander

        --
        Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
