advait has asked for the wisdom of the Perl Monks concerning the following question:

Hi All, I have made a small web-based application that accepts some parameters via form elements and submits them to a perl script. I want to avoid the cases where the web interface is bypassed and the perl script is called directly from another perl script. Please give me some idea of how to stop such a thing.

Re: avoid by pass of web interface
by jZed (Prior) on Aug 08, 2007 at 00:14 UTC
    You can't stop people (or scripts) from submitting forms to a CGI script and you can't stop them from putting whatever they want in those forms. You *can* (and should) check all input from forms and validate it against what you expect it to be and only accept forms with valid input.
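    A minimal sketch of that kind of whitelist validation (the field names `username` and `age` are illustrative, not from the original question):

```perl
use strict;
use warnings;

# Whitelist validation: accept a request only if every expected
# field matches a strict pattern. (Field names are illustrative.)
sub valid_params {
    my (%p) = @_;
    return 0 unless defined $p{username} && $p{username} =~ /\A\w{1,32}\z/;
    return 0 unless defined $p{age}      && $p{age}      =~ /\A\d{1,3}\z/;
    return 1;
}

# In the CGI script itself you would do something like:
#   my $q = CGI->new;
#   unless (valid_params(map { $_ => scalar $q->param($_) } qw(username age))) {
#       print $q->header(-status => '400 Bad Request');
#       exit;
#   }
```

    The point is to describe what valid input looks like and reject everything else, rather than trying to enumerate bad input.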
Re: avoid by pass of web interface
by derby (Abbot) on Aug 07, 2007 at 23:11 UTC

    You need to further explain what you want. If you want to ensure a human is using your web app and not some type of bot scraper, take a look at the captcha modules.

    -derby
Re: avoid by pass of web interface
by wind (Priest) on Aug 08, 2007 at 01:34 UTC
    Move your secondary perl script outside of docroot or cgi-bin of your web server. Then, your script will only be accessible by the web interfaces that you design.
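    One way to sketch that layout (the paths are illustrative): the public CGI script stays under cgi-bin, and pulls in the private worker with `do`, which executes a file and returns its last expression:

```perl
use strict;
use warnings;

# The public CGI script stays under cgi-bin; the secondary script lives
# outside docroot, e.g. /srv/myapp/private/worker.pl (path illustrative),
# so the web server can never be asked to serve or run it directly.
sub run_worker {
    my ($path) = @_;
    # `do FILE` executes the file and returns its last expression.
    my $result = do $path;
    die "worker failed: " . ($@ || $!) unless defined $result;
    return $result;
}

# In the CGI script, after validating input:
#   my $out = run_worker('/srv/myapp/private/worker.pl');
```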

    - Miller
Re: avoid by pass of web interface
by ysth (Canon) on Aug 08, 2007 at 03:45 UTC
    When run under the web server, your script will get information from a number of environment variables in a way specified by the common gateway interface (CGI) protocol. You can check that these are set as you would expect, but since a perl script invoking your script directly could also have set them, it isn't infallible.

    First is QUERY_STRING; perl's CGI module uses this to get param information, but it will also by default look on the command line or STDIN. Turn this off with "use CGI -no_debug;" and verify that parameters you expect are present.

    Verify that $ENV{REQUEST_METHOD} is GET or POST (whichever your form specified). It's also available from the CGI module as request_method(); other variables are similarly available - see CGI.

    Check that HTTP_REFERER (sic) is the page containing your form.

    That should stop most casual attempts to run your script. You can also set permissions so that only the user the webserver runs as has access to the script.

    To further secure things, use a captcha image on your form page and/or a session (via cookie, parameter, or path).
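    A sketch of the method/referer checks above (the form URL is an assumption, and as noted these headers can be forged, so this only deters casual attempts):

```perl
use strict;
use warnings;

# True only for a POST whose Referer points at our own form page.
# (Replace the URL with your real form's address.)
sub looks_like_form_post {
    my ($method, $referer) = @_;
    return 0 unless ($method  // '') eq 'POST';
    return 0 unless ($referer // '') =~ m{\Ahttps?://www\.example\.com/form\.html};
    return 1;
}

# In the CGI script:
#   use CGI qw(-no_debug);    # no STDIN/command-line fallback for params
#   my $q = CGI->new;
#   exit unless looks_like_form_post($q->request_method, $q->referer);
```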

Re: avoid by pass of web interface
by erroneousBollock (Curate) on Aug 08, 2007 at 05:11 UTC
    I ignored this thread when it came up but it did get me thinking later.

    I'm aware that no scheme to force the use of a 'WEB GUI' will be 100% effective, but I thought it might make an entertaining thought exercise.

    The first thing that came to my mind was that if you've made a nice fat Javascript/forms front-end for some REST/SOAP back-end... then it's all too easy to talk to the back-end directly once you know the URLs and schemas... so basically avoid writing the clean/sexy type of fat GUI currently being espoused in AJAXy circles.

    Then I thought.... "what is the minimum standard for 'normal' use of the web interface?". Thinking about that makes you figure out exactly which features the client's web browser must have to pass your test.

    You could do something nasty like:

    1. have each link/submit URL be generated by client-side Javascript
    2. that url-generating Javascript is patched every time the server returns a new page
    3. if you have multiple AJAX-style requests going simultaneously, then you'll have to implement some sort of 'patch queue' to keep it all straight ;)
    4. all of that could be spoofed by some perl script which embeds a Javascript interpreter, so perhaps you'd need to use some tricky DOM/Canvas manipulations in your Javascript from which your code can extract hard-to-anticipate browser-computed values (very fragile ;)

    How far could this be taken? :D

    -David

Re: avoid by pass of web interface
by moritz (Cardinal) on Aug 08, 2007 at 10:18 UTC
    You can only achieve this securely with your operating system's permission system.

    For example, your CGI script could generate random session IDs, store them in a database or flat file, and pass one to your non-CGI script; the non-CGI script then checks that resource (DB/file) for the session ID and deletes it (to ensure it's used only once).

    But if you want the scheme to be secure, you have to make sure that no other user on the system can read or write that file/DB - and that's the task of your operating system. For example on unix you'd have to run the apache process that starts the CGI under a special user, and all other users are not allowed to read/write that file/DB.
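    A minimal sketch of such a one-shot token, assuming a token directory (`/tmp/myapp-tokens` here is illustrative; in practice it should be owned by, and readable only by, the web server's user) and with `rand` standing in for a proper CSPRNG:

```perl
use strict;
use warnings;
use Fcntl qw(O_CREAT O_EXCL O_WRONLY);

# One-shot token scheme. The CGI script mints a token; the non-CGI
# script consumes it exactly once. Directory path is illustrative.
my $token_dir = '/tmp/myapp-tokens';

sub mint_token {
    # 16 random bytes as 32 hex chars. For real use, substitute a
    # cryptographically strong source instead of rand().
    my $token = join '', map { sprintf '%02x', int rand 256 } 1 .. 16;
    mkdir $token_dir, 0700 unless -d $token_dir;
    sysopen my $fh, "$token_dir/$token", O_CREAT | O_EXCL | O_WRONLY, 0600
        or die "can't create token: $!";
    close $fh;
    return $token;
}

sub consume_token {
    my ($token) = @_;
    return 0 unless defined $token && $token =~ /\A[0-9a-f]{32}\z/;  # never trust input
    # unlink is atomic: it succeeds for exactly one caller, so a token
    # can never be used twice even under concurrent requests.
    return unlink("$token_dir/$token") ? 1 : 0;
}
```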

    Thinking about it a bit more, perhaps you can just check the user ID - nobody but root and Apache can run a script under Apache's UID. (If your potential attacker has root access, you're lost anyway.)