TQuid has asked for the wisdom of the Perl Monks concerning the following question:

This humble novice beseeches the great monks of Perl to restore his sanity and keep at bay the hounds that demand "easy" interfaces of him so he may eat:

I have written a small server that executes system commands in a (relatively) safe way. Its main reason for existence is to interact with a CGI that provides the UI that everyone in my company will (hopefully) use. This server has a small dispatch table of commands, and exits nicely when you say "BYE\n".

Unless, that is, you're a CGI program.

My CGI has lovely taint-checking and all sorts of cool doo-dads that I'm inordinately fond of. What sucks is that, after establishing a connection with the server, parsing for a READY signal, sending a command, and even getting back output, it just sits there eternally. If I kill the server, the CGI then shows the rest of the output. Having the server shutdown() the connection also works, but I'd like to keep the server interactive instead of processing only one command per session.

Obviously this is a buffering issue, but I'm stumped. I've looked in the Jaguar, the Camel, the Ram, yea have I studied many animal entrails, and yet nothing seems to work the way I would like. I'm now contemplating using syswrite and sysread, but my aesthetics rebel at this--there's got to be a better way.
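For the record, the sort of sysread loop I'm contemplating (and would rather avoid) would look something like this. It's an untested sketch; the 4096-byte chunk size and the end-of-reply pattern are placeholders, not my real protocol:

# Unbuffered reading: take whatever bytes are available, up to 4096
# at a time, without the per-line buffering that <$sock> does.
my $response = '';
while (1) {
    my $n = sysread($sock, my $chunk, 4096);
    die "sysread failed: $!" unless defined $n;
    last if $n == 0;                        # server closed the connection
    $response .= $chunk;
    last if $response =~ /^(OK|ERROR)/m;    # crude end-of-reply check
}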

Without further ado, some relevant code:

# $main_sock, %dispatch, $allowed_ip and $greeting are set up earlier (not shown).
my ($new_sock, $pid, $buf);
while ($new_sock = $main_sock->accept()) {
    $new_sock->autoflush(1);
    # In theory, new sockets are always non-blocking anyway
    # with IO::Socket::INET, which I am using here.
    $pid = fork();
    die "Cannot fork: $!" unless defined $pid;
    if ($pid == 0) {
        # Child
        unless ($new_sock->peerhost eq $allowed_ip) {
            shutdown $new_sock, 2;
            exit 0;
        }
        print $new_sock $greeting;
        while (defined($buf = <$new_sock>)) {
            process($buf, $new_sock);
            # shutdown($new_sock, 1); # Works, but shuts down the whole session
        }
        exit 0;
    }
    # Else 'tis the parent process, which goes back to accept()
}
close $main_sock;

my ($error, $output);

sub process {
    my $line = shift;
    $line =~ tr/\r\n//d;      # fancy chomp
    return if $line eq '';    # What's that again?
    my ($command, @args) = split /\s+/, $line;
    undef $error;
    undef $output;
    if ($command =~ /^BYE/) {
        print $new_sock "OK disconnecting . . . \n";
        shutdown $new_sock, 2;
        exit 0;
    }
    unless (exists $dispatch{$command}) {
        return unless defined $command;   # Ignore blank lines
        print $new_sock "ERROR unknown command \"$command\"\n";
        return;
    }
    $dispatch{$command}->(@args);
    if ($error) {   # main scope variable set by the dispatch command
        print $new_sock <<EOF;
ERROR $error
$output
EOF
        return;
    }
    else {
        print $new_sock <<EOF;
OK
$output
EOF
    }
}

Now, the CGI:

# $host, %action, $selection, my_header() and footer() are defined elsewhere (not shown).
sub execute {
    my $port = 1200;
    my $sock = new IO::Socket::INET (
        PeerAddr => $host,
        PeerPort => $port,
        Proto    => 'tcp'
    );
    die "Couldn't connect to $host:$port : $@" unless ($sock);

    # Saints preserve me from having to use this
    # my $flags;
    # fcntl($sock, F_GETFL, $flags) or die "Couldn't get flags for socket: $!";
    # $flags |= O_NONBLOCK;
    # fcntl($sock, F_SETFL, $flags) or die "Couldn't set flags for socket: $!";

    my_header('Processing Command', 'wheat');
    printf "<b>Site %s is on %s</b><br>", param('site'), $host;

    my @args;
    foreach my $i ( @{ $action{$selection}{args} } ) {
        push @args, param($i);
    }
    my @command = join(' ', ($action{$selection}{command}, @args));

    my ($line, @output);
    print "Waiting for ready signal . . . <BR><PRE>";
    until ($line =~ /^READY/) {
        $line = <$sock>;
        print $line, "\n";
    }
    print "</PRE>";

    # This part prints out fine! But after, the web client hangs.
    print $sock (@command, "\n");
    while ( $line = <$sock> ) {
        push @output, $line;
    }
    print $sock "BYE\n";
    close $sock or die "Can't close socket: $!";

    print h2("No response from server $host!<BR>") unless @output;
    print h2('Server responded:'), pre(@output);
    footer();   # This all prints if I kill the server
}

Many thanks to any and all who may shed light on this vexation.

--TQuid

Re: Socket programming & CGI hell
by lhoward (Vicar) on Jul 19, 2000 at 22:24 UTC
    Because your CGI program is basically an "execute once" type program, your server needs to behave appropriately. There is no point in the server keeping the connection open, because once the client (the .cgi program) has completed, it will close the socket from its end. If what you want is for the .cgi program to send multiple requests to the server in one "session" (i.e. one user "submit" from the webpage), have the .cgi send a "logoff" message to the server, which causes the server to tidy things up and flush the connection from its end.

    Beware that the kind of thing you're writing is fraught with danger and could cause lots of problems if someone ever hacked the protocol (not hard to do) unless you have authentication and encryption built into the client/server protocol.
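    For example, a minimal challenge-response handshake over a shared secret could look something like the untested sketch below. The nonce generation and the $secret handling are only illustrative, and note this authenticates the client but does nothing to encrypt the traffic:

    use Digest::MD5 qw(md5_hex);

    my $secret = 'known-to-both-ends';   # shared secret, never sent on the wire

    # Server side: issue a random challenge, verify the client's digest
    my $nonce = md5_hex(time() . $$ . rand());
    print $new_sock "CHALLENGE $nonce\n";
    chomp(my $answer = <$new_sock>);
    unless ($answer eq md5_hex($nonce . $secret)) {
        print $new_sock "ERROR authentication failed\n";
        shutdown $new_sock, 2;
        exit 0;
    }

    # Client side: answer the challenge
    chomp(my $challenge = <$sock>);
    my ($tag, $n) = split ' ', $challenge;
    print $sock md5_hex($n . $secret), "\n";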

      I believe this section does just what you've said:

      if ($command =~ /^BYE/) {
          print $new_sock "OK disconnecting . . . \n";
          shutdown $new_sock, 2;
          exit 0;
      }

      Thanks for mentioning security--I should have mentioned, this will be happening inside an SSL-encrypted connection (and doubtless I'll be asking about Net::SSLeay on here soon). The server rejects any connections not coming from a small range currently, but encryption is indeed necessary before I can have anything like real security.

      --TQuid

        I'm less sure of what you're asking for now. Is your CGI sending multiple commands to the server in one session, but not seeing all of the output until you close the session? If so, you may want to look into flushing the sockets on both the server and the client after each "block" is ready to be sent. I generally use IO::Socket, which has autoflush turned on by default.
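
        For example (a sketch only; the host name and port are placeholders):

        use IO::Socket::INET;

        my $host = 'server.example.com';   # placeholder
        my $sock = IO::Socket::INET->new(
            PeerAddr => $host,
            PeerPort => 1200,
            Proto    => 'tcp',
        ) or die "connect failed: $@";

        # IO::Socket enables autoflush for you on recent versions,
        # but being explicit costs nothing:
        $sock->autoflush(1);

        # The old-fashioned equivalent for any filehandle:
        select((select($sock), $| = 1)[0]);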
Re: Socket programming & CGI hell
by DrManhattan (Chaplain) on Jul 19, 2000 at 22:58 UTC
    Your CGI script loops as long as the socket is open:
    while ( $line = <$sock> ) {
        push @output, $line;
    }
    print $sock "BYE\n";

    Your server doesn't close the connection until it gets a "BYE" from the client, but the client doesn't send a "BYE" until the server closes the connection. An easy solution would be to have the server send a numeric code or some other marker that flags the end of a command's output:

    # In the server script
    sub process {
        ...
        print $new_sock <<EOF;
OK
$output
123 End of output
EOF
    }

    # In the CGI script
    sub execute {
        while ( ($line = <$sock>) !~ /^123/ ) {
            push @output, $line;
        }
        print $sock "BYE\n";
    }
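
    If the marker could ever show up in a command's real output, a length-prefixed reply removes the ambiguity entirely. Another untested sketch; $reply and $payload are illustrative names:

    # Server: send the byte count first, then the payload
    my $reply = "OK\n$output";
    print $new_sock length($reply), "\n", $reply;

    # Client: read the count line, then exactly that many bytes
    chomp(my $len = <$sock>);
    read($sock, my $payload, $len) == $len
        or die "short read from server: $!";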

    -Matt

Re: Socket programming & CGI hell
by dbrown (Initiate) on Jul 19, 2000 at 22:38 UTC
    Your process() function executes one command and then returns. As far as I can tell, it doesn't even close the socket (unless there was an error). You need to do something like:
    sub process {
        while (@_) {
            my $line = shift @_;
            .
            .
            .
        }

        print $new_sock "Done.\n";
        shutdown $new_sock, 1;    # finished writing to the client
    }

    That way it will execute every command that you give it and then close the socket properly. Also, any "returns" in your process() function should be changed to "nexts".

    lhoward also makes a good point about security.


      Execute one command and return is exactly what process() is supposed to do; sorry if that is not clear. It's not meant to close the socket until it receives a BYE command. What should happen (and what does happen, if you telnet in) is that it processes one line, returns, processes the next, and so on, until it gets a BYE, and then it disconnects, and Bob's your uncle.

      Your solution would work nicely for batch processing, but that isn't what I want to do.

      --TQuid