Ovid has asked for the wisdom of the Perl Monks concerning the following question:

I'm working on the second lesson of my online CGI course and ran into a bit of a stumper.

I'm demonstrating why most alternatives to CGI.pm fail on file uploads and I use the following script to show the contents of <STDIN> following the browser POST:

#!c:/perl/bin/perl.exe -wT
use strict;

my $buffer;
read (STDIN, $buffer, $ENV{'CONTENT_LENGTH'});
print "Content-type: text/plain\n\n";
print $buffer;
Now generally, if the Content-type is text/html, this represents a security hole, as a user could enter a dangerous SSI directive (e.g. <!--#exec cmd="/bin/rm -fr"-->). If the Web server is configured to allow SSI interpretation in CGI output, you've just had a bunch of files wiped out. However, if the Content-type is text/plain, do servers ignore SSI? If it is in any way possible for such a directive to be injected through a script like this, I would like to include that in one of my "Security Checkpoints."
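Whatever a given server does with text/plain, the robust fix on the script side is to escape user input before echoing it back. Below is a minimal sketch (html_escape is an illustrative helper, not CGI.pm API; HTML::Entities does the same job with a module) that renders an embedded directive inert even if the output were later served with SSI enabled:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Minimal sketch of neutralizing SSI in generated output: escape the
# characters that let a directive through.
sub html_escape {
    my $text = shift;
    $text =~ s/&/&amp;/g;    # must run first, or entities get double-escaped
    $text =~ s/</&lt;/g;
    $text =~ s/>/&gt;/g;
    $text =~ s/"/&quot;/g;
    return $text;
}

my $payload = '<!--#exec cmd="/bin/rm -fr"-->';
print html_escape($payload), "\n";
# The directive comes back as &lt;!--#exec ...--&gt;, which an
# SSI-aware server will render as text rather than execute.
```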

Cheers,
Ovid

Join the Perlmonks Setiathome Group or just go to the link and check out our stats.

Replies are listed 'Best First'.
Re: Security question
by merlyn (Sage) on Oct 09, 2000 at 07:51 UTC
    I think you have a false presupposition. The output of a CGI program is not further rescanned under Apache. (Dunno other servers, but don't think so there either.)

    The problem is not a CGI script displaying SSI directives. The problem is when a CGI script has written out an HTML file that is later processed with SSI enabled (served via mod_include instead of mod_cgi). That's where it'll get ya.

    -- Randal L. Schwartz, Perl hacker

Re (tilly) 1: Security question
by tilly (Archbishop) on Oct 09, 2000 at 14:43 UTC
    A few minor points on file uploads.

    First of all, the CGI.pm docs are particularly bad on this point: the examples do not even die on failed opens. In addition, when done right you should be sure to address issues such as race conditions, and you should never trust user input for filenames.

    And speaking of not trusting user input, the following Phrack article should give some food for thought on how easy it is to go very, very wrong. Note that simple taint checking may or may not help; often people just pass virtually anything through rather than thinking about why the check is there and what can happen.

    Among other things, it will show you why security-conscious programmers are likely to explicitly write:

    open (FOO, "<$foo") or die "Cannot read $foo: $!";
    rather than just trusting the filename...
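To make the two warnings above concrete, here is a sketch (the helper names and the character whitelist are assumptions for illustration, not CGI.pm API) that reduces a client-supplied upload name to a safe basename and creates the target with O_EXCL, so an existing file or attacker's symlink makes the open fail instead of being clobbered:

```perl
use strict;
use warnings;
use File::Basename ();
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

# Illustrative helper: never trust the browser's idea of a filename.
sub safe_upload_name {
    my $raw = shift;
    # Drop any client-side directory part (../../etc/passwd and friends).
    my $base = File::Basename::basename($raw);
    # Whitelist known-good characters instead of blacklisting bad ones;
    # this also flattens Windows separators like \ and :.
    $base =~ s/[^A-Za-z0-9._-]/_/g;
    die "unusable filename\n" if $base eq '' or $base =~ /\A\.+\z/;
    return $base;
}

# O_EXCL makes create-and-open atomic: if the path already exists
# (perhaps as a symlink planted between check and open), the open fails.
sub open_upload_target {
    my ($dir, $raw) = @_;
    my $name = safe_upload_name($raw);
    sysopen(my $fh, "$dir/$name", O_WRONLY | O_CREAT | O_EXCL, 0644)
        or die "Cannot create $name: $!\n";
    return ($fh, $name);
}
```

With CGI.pm itself, the upload filehandle returned by param() would then be copied through this target handle; the point is only that both the name and the open are checked.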
Re: Security question
by AgentM (Curate) on Oct 09, 2000 at 07:45 UTC
    No, text/plain will not be run through any script scanning (PHP, PerlScript, etc.). Generally, web servers only scan .shtml files, .php files, and so on.
    AgentM Systems or Nasca Enterprises is not responsible for the comments made by AgentM- anywhere.
(Ovid) Re: Security question
by Ovid (Cardinal) on Oct 09, 2000 at 19:32 UTC
    Thanks for all of the input. This is a misapprehension that I've labored under for a while.

    In fact, now that I'm rereading the source for that SSI information, it specifically lists this as a problem with scripts that create static HTML pages.

    tilly: thanks for the link. A hardcopy is sitting on my desk now. Amusingly enough, that comment about "they do not even die on failed opens" is particularly frustrating. Two days ago, I went down to a technical bookstore and scanned about 4 books dealing with Perl and CGI. Not one of them consistently checked return codes on file opens. That, of course, is in addition to all of the typical problems: no strict, -w, or -T. And these people are touting themselves as professionals!!! Some of them clearly know Perl better than I do (which isn't hard to believe), so it was dismaying to see such dangerous programs in print.

    It's a sad, sad, world.

    Cheers,
    Ovid


      When I first saw that I was shocked and dismayed.

      Then I thought about it.

      The problem is fundamental. One of the great shortcomings of a CGI environment is that there is not a great standardized error reporting scheme. If you die, that only gives an informative message if you have CGI::Carp or some equivalent installed. Is it the place of CGI.pm to discuss where to find your error logs? Another solution is centralized error reporting, but that is a site decision.

      Without a standardized way to display meaningful errors, there is no good way to trap them. And CGI.pm cannot assume a good standardized way to display them. Hence there is a catch-22.

      What I think would be a good solution is to have a good online tutorial somewhere, then have CGI.pm point out the issue and direct people to it. Said tutorial will need to discuss the options for error reporting, settle on one very early, and then use it consistently.
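A dependency-free sketch of "settle on one scheme and use it consistently": trap failures with eval so the script itself emits a well-formed response on error (run_cgi and the config path are illustrative assumptions; CGI::Carp's fatalsToBrowser packages the same idea as a module):

```perl
use strict;
use warnings;

# Sketch: the script, not the server, decides what an error looks like,
# so every failure still produces a complete, legible HTTP response.
sub run_cgi {
    my ($config) = @_;
    my $body = eval {
        open my $fh, '<', $config
            or die "Cannot read $config: $!\n";
        local $/;          # slurp mode
        <$fh>;
    };
    if (my $err = $@) {
        return "Content-type: text/plain\n\nScript error: $err";
    }
    return "Content-type: text/plain\n\n" . $body;
}

print run_cgi('/no/such/config');
```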

      The books, OTOH, have no excuse. A book is a format which (like a tutorial) can cover error reporting early, settle on an option, then use it consistently in the examples.

      Almost any book with both "PERL" and "CGI" in the title (yes, capitalization is significant here) is worthless. Likewise if they call it "Perl 5". Instant clue of lack of clues.

      -- Randal L. Schwartz, Perl hacker