BUU has asked for the wisdom of the Perl Monks concerning the following question:

Ah yes. What a fun question :)

First off, let me say that I realize there is going to be some risk no matter what I do. So please don't reply just to say "don't do it" unless you have some specific reason to point out that's a "deal breaker", as it were.

Anyway, disclaimer out of the way, here is the gist of the idea. I have a "server process" that receives untrusted perl code I want to run. This perl code should only affect things in the server process, so it should only need to print to a special file handle. Here are my steps (a rough sketch of the child side follows the list):
  1. The server process forks, passing a copy of the untrusted perl code to the child.
  2. The child sets some really strict RLIMITs, such as RLIMIT_CPU to, say, 2 or 3, RLIMIT_NPROC to 2 or 3, and RLIMIT_VMEM to a suitably low amount, along with whatever other rlimits seem needed
  3. The child chroots and chdirs to an empty, unimportant folder someplace
  4. The child changes its user ID to a user with basically no permissions, anywhere
  5. The child creates a Safe compartment, and disallows everything dealing with the system, such as system, open, chdir, etc.
  6. The untrusted perl code is then reval'd inside the safe compartment.
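
A rough, untested sketch of what that child might look like, assuming BSD::Resource is installed; the uid, directory, and limit values below are placeholders, and RLIMIT_AS is the Linux spelling of what I called RLIMIT_VMEM above:

  use strict;
  use warnings;
  use BSD::Resource qw(setrlimit RLIMIT_CPU RLIMIT_NPROC RLIMIT_AS);
  use POSIX qw(setuid setgid);
  use Safe;

  my $untrusted_code = shift;    # handed to the child by the server, however you pass it

  # Step 2: clamp resources before anything else happens.
  setrlimit(RLIMIT_CPU,   3, 3)                       or die "RLIMIT_CPU: $!";
  setrlimit(RLIMIT_NPROC, 3, 3)                       or die "RLIMIT_NPROC: $!";
  setrlimit(RLIMIT_AS,    20 * 1024**2, 20 * 1024**2) or die "RLIMIT_AS: $!";

  # Step 3: jail the process in an empty directory (requires root).
  chroot('/var/empty') or die "chroot: $!";
  chdir('/')           or die "chdir: $!";

  # Step 4: drop root for good -- group first, then user.
  setgid(65534) or die "setgid: $!";    # 65534 is "nobody" on many boxes
  setuid(65534) or die "setuid: $!";
  die "still root?!" if $< == 0 or $> == 0;

  # Steps 5 and 6: run the code in a restricted compartment. The default
  # opcode mask already traps system, open, chdir, backticks, and friends.
  my $cmp = Safe->new;
  $cmp->deny(':subprocess');    # belt and braces: no fork/system/backticks
  $cmp->reval($untrusted_code);
  print STDERR "sandboxed code died: $@" if $@;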


Anyone see any major holes I'm missing? As far as I can tell, the rlimits and chroot should catch basically everything: since the untrusted code won't be run as root or have any way of getting root, it won't be able to break the chroot or modify the rlimits.

Replies are listed 'Best First'.
Re: Running untrusted perl code
by tachyon (Chancellor) on May 30, 2004 at 11:55 UTC

    I assume you are talking about letting people use a browser-based perl execution widget for some sort of tutorial purpose? Each CGI is a separate process, so you could still consume all your resources quite easily, I would think: 2% at a time x 50 times == 100%.

    use LWP::Simple; get( 'http://domain.com/cgi-bin/safe.pl?code=fork+while+1;dump' ) for 1..10000;

    Code intentionally partially invalid

    cheers

    tachyon

      Heh, congratulations, you've managed to come up with the same damn problem I came up with thinking about this last night. The best solution I can think of at the moment is to require some sort of authentication with the server, so you have to create an account, and then impose some sort of process limit at the user level.

      The script in question isn't going to be a CGI, it's going to be a dedicated perl script that runs the untrusted perl code. Mostly.

      The only trick there, of course, is to prevent one "user" from having a large number of accounts, on which, I confess, I'm a tad stumped..
        So impose an artificial limit on the maximum number of processes, period. That way if you get 100 users trying to test stuff at the same time (and the per-user limit is 1) and the max limit is 20, 80 of them will be informed to try again later :)
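
        Something like this untested sketch could sit in the server's accept loop -- $MAX_JOBS, the reply text, and run_sandboxed() are made up for illustration:

          use POSIX qw(WNOHANG);

          my $MAX_JOBS = 20;
          my %kids;                        # pid => start time

          sub reap {                       # collect children that have finished
              while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
                  delete $kids{$pid};
              }
          }

          sub try_to_run {
              my ($client, $code) = @_;    # $client: the requester's socket
              reap();
              if (keys %kids >= $MAX_JOBS) {
                  print $client "Server busy -- please try again later.\n";
                  return;
              }
              my $pid = fork();
              die "fork: $!" unless defined $pid;
              if ($pid == 0) {
                  run_sandboxed($code);    # the chroot/rlimit/Safe child from the OP
                  exit 0;
              }
              $kids{$pid} = time;
          }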

        MJD says "you can't just make shit up and expect the computer to know what you mean, retardo!"
        I run a Win32 PPM repository for perl 5.6.x and 5.8.x -- I take requests (README).
        ** The third rule of perl club is a statement of fact: pod is sexy.

Re: Running untrusted perl code
by graff (Chancellor) on May 30, 2004 at 17:34 UTC
    1. The server process forks, passing a copy of the untrusted perl code to the child.

    Given tachyon's very apt reply, the question becomes: which server are you talking about in point 1? If it's the web server process, then tachyon is right, and this is a bad idea regardless of the constraints you try to place on a given child process.

    But if there is a dedicated server, whose sole purpose is to receive requests that contain code to be executed in a safe environment, then you have a chance of controlling how many children can be active at any one time.

    Maybe a web service could use this sort of setup by taking requests from clients and passing them on to a dedicated script-runner server, then looking for some sort of feedback from that server as to the result of the request (e.g. it was rejected, it was queued to run as soon as the current child(ren) is (are) done, it is going to run now, etc.). You'd need to cover the extra complications of keeping track of where to send the results of child processes, given that they've been run apart from the web server -- I'm actually not clear on how that could be done...

    For that matter, if you could figure some way for the web server to keep track of how many children are in progress, then that could suffice.

    I'm not familiar with tweaking process limits at run-time, so I'd have to ask what sort of limit setting will stop a script that goes into an infinite loop like while(1) { do_something_minor; sleep 1; }

      The server in question is going to be a special server, mostly dedicated to running untrusted perl code. So hopefully I can do something to prevent one person from running thirty processes. I suspect I'll require some sort of auth and just prevent a specific person from running more than one process.

      As for the process limits, the two main ones I'm thinking of are:
      • RLIMIT_CPU
      • RLIMIT_VMEM
      RLIMIT_CPU will stop anything that tries to run for too long (and hopefully use up too much of the CPU, although it might need some more restrictions), while RLIMIT_VMEM will stop a script from using up too much "virtual memory" (RAM + swap and so forth). As for a script that sleeps for a long time, well, at the moment nothing will catch it, except that if a specific user's scripts are all sleeping for a long time, he won't be able to run any more scripts, assuming I implement some sort of "process limit" at a user level. Eventually the script in question should accumulate enough cpu time that the rlimit will kill it. (Even if it takes a long time to accumulate that much cpu time, it's obviously not doing anything to affect the box if it's sleeping all the time.)
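
      A toy illustration of the difference, assuming BSD::Resource is installed (untested):

        use BSD::Resource qw(setrlimit RLIMIT_CPU);

        setrlimit(RLIMIT_CPU, 2, 2) or die "setrlimit: $!";

        # A busy loop burns CPU time and gets SIGXCPU after about 2 seconds:
        #   1 while 1;

        # A sleeping loop accumulates almost no CPU time, so it can idle
        # for hours before the same limit finally kills it:
        while (1) { sleep 1 }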
        Don't forget to provide a means for users to kill the jobs that they submit. That would be a useful feature for all concerned.

        Not knowing what range of tasks the untrusted perl code is expected to address, I wonder whether there might be a trade-off between keeping the server safe and providing an adequate range of resources to support meaningful tasks (e.g. access to non-core modules? user-specifiable input data? debugging mode?). Anyway, good luck with it.

Re: Running untrusted perl code
by andyf (Pilgrim) on May 30, 2004 at 20:00 UTC
    I recommend UML
    If you are really serious about security for running untrusted code on a system, try it. Far better than any amount of userspace security, chrooting, uid management or thread tracking. I keep a whole bunch of User Mode Linux filesystems of our development and production images; running new code in a VM makes me feel relaxed about what would otherwise be very stressful operations. The most you can lose is a copy of a COW file.
Re: Running untrusted perl code
by Zaxo (Archbishop) on May 30, 2004 at 17:47 UTC

    Your sketch design looks about right for a sandbox application. You may want to consider a special installation of Perl within a chroot tree on a partition of its own. Take a look at Opcode to work with Safe. There is an opcode group called base_io which allows IO operations on filehandles, but not their creation.
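
    A minimal, untested sketch of that combination -- the filehandle name and log path are invented for illustration:

      use Safe;

      open SANDBOX_OUT, '>', '/tmp/sandbox.log' or die "open: $!";

      my $cmp = Safe->new;              # default opset already includes :base_io
      $cmp->deny(':filesys_open');      # belt and braces: still no open()/sysopen()
      $cmp->share('*SANDBOX_OUT');      # hand in the one approved handle

      $cmp->reval(q{ print SANDBOX_OUT "hello from the compartment\n" });
      warn "compartment error: $@" if $@;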

    After Compline,
    Zaxo

      What's the advantage of installing perl in a chroot tree? As far as I can tell, the way I have it set up, since the process is already running it won't need any other files inside the chroot, so even if it manages to do file IO (which I don't think is that hard; I wasn't under the impression Safe was that secure) it can't touch *anything*. If there's a perl install, they could put a trojan in there or something, and since the original perl server needs to be run as root to chroot/set rlimits/etc., they could break the chroot and do other unfun stuff like that.

        The idea is to have a mini-system in the chroot tree and call an effective chroot /path/to/sandbox /bin/script, where script actually is in /path/to/sandbox/bin. You will need to set that up, however you call it, suid root, and you need to release root privilege before any user code is evaled. A small C program may be the simplest to set up. Any system utils you permit will need to be copied into that tree in their customary locations. They should be static builds unless you want to copy in all of the shared libraries they need as well.
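
        For instance, a launcher along these lines (untested, sketched in perl rather than C; the paths and uid are illustrative) -- it must start with root privilege for chroot() to work:

          use strict;
          use warnings;
          use POSIX qw(setuid setgid);

          chroot('/path/to/sandbox') or die "chroot: $!";
          chdir('/')                 or die "chdir: $!";

          setgid(65534) or die "setgid: $!";   # release root privilege
          setuid(65534) or die "setuid: $!";   # before any user code runs

          # /bin/script here is /path/to/sandbox/bin/script outside the jail
          exec '/bin/script' or die "exec: $!";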

        Not even root can break out of a properly set up sandbox. The perl installation is needed the same as any other executable. If it's not in the sandbox it effectively doesn't exist for the jailed process.

        If you lack privilege to secure this, you probably shouldn't be doing it.

        After Compline,
        Zaxo