in reply to Running untrusted perl code

1. The server process forks, passing a copy of the untrusted perl code to the child.

Given tachyon's very apt reply, the question becomes: which server are you talking about in point 1? If it's the web server process, then tachyon is right, and this is a bad idea regardless of the constraints you try to place on a given child process.

But if there is a dedicated server, whose sole purpose is to receive requests that contain code to be executed in a safe environment, then you have a chance of controlling how many children can be active at any one time.
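
Just to make that concrete, here's a minimal sketch of such a dedicated server. It takes one request per line on STDIN (a path to a submitted script -- a real server would listen on a socket instead), caps the number of concurrent children, and reaps them as they finish; the cap and the messages are made up purely for illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX ":sys_wait_h";

    my $MAX_CHILDREN = 5;   # illustrative cap on concurrent jobs
    my %children;           # pid => script it is running

    $SIG{CHLD} = sub {
        # reap finished children so the active count stays accurate
        while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
            delete $children{$pid};
        }
    };

    while (my $script = <STDIN>) {
        chomp $script;
        if (keys %children >= $MAX_CHILDREN) {
            print "rejected: $script (too many active jobs)\n";
            next;
        }
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            exec $^X, $script or die "exec failed: $!";  # child runs the code
        }
        $children{$pid} = $script;    # parent just tracks it
        print "running: $script (pid $pid)\n";
    }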

Maybe a web service could use this sort of setup by taking requests from clients and passing them on to a dedicated script-runner server, then looking for some sort of feedback from that server as to the result of the request (e.g. it was rejected, it was queued to run as soon as the current child(ren) is(are) done, it is going to run now, etc.). You'd also need to cover the extra complication of keeping track of where to send the results of the child processes, given that they run apart from the web server -- I'm actually not clear on how that could be done...
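
If the runner spoke a simple line-based protocol, the web server's end might look something like this (the port and the status strings are assumptions, not anything real):

    use strict;
    use warnings;
    use IO::Socket::INET;

    my $runner = IO::Socket::INET->new(
        PeerAddr => 'localhost',
        PeerPort => 9999,        # assumed port for the script-runner
        Proto    => 'tcp',
    ) or die "can't reach script-runner: $!";

    print $runner "run /tmp/submitted_script.pl\n";
    my $status = <$runner>;      # e.g. "rejected", "queued", "running"
    print "runner says: $status";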

For that matter, if you could figure out some way for the web server itself to keep track of how many children are in progress, then that could suffice.

I'm not familiar with tweaking process limits at run-time, so I'd have to ask what sort of limit setting will stop a script that goes into an infinite loop like while(1) { do_something_minor; sleep 1; }

Re: Re: Running untrusted perl code
by BUU (Prior) on May 30, 2004 at 21:27 UTC
    The server in question is going to be a special server, mostly dedicated to running untrusted perl code. So hopefully I can do something to prevent one person from running thirty processes. I suspect I'll require some sort of auth and just prevent a specific person from running more than one process.
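
    Something like this per-user bookkeeping is what I have in mind -- just a sketch, assuming each request arrives tagged with an authenticated user name:

        my %jobs_by_user;       # user => { pid => 1, ... }
        my $MAX_PER_USER = 1;   # illustrative per-user cap

        sub can_run {
            my ($user) = @_;
            return keys %{ $jobs_by_user{$user} || {} } < $MAX_PER_USER;
        }

        sub note_started  { my ($user, $pid) = @_; $jobs_by_user{$user}{$pid} = 1 }
        sub note_finished { my ($user, $pid) = @_; delete $jobs_by_user{$user}{$pid} }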

    As for the process limits, the two main ones I'm thinking of are
    • RLIMIT_CPU
    • RLIMIT_VMEM
    RLIMIT_CPU will stop anything that tries to run for too long (and hopefully use up too much of the cpu, although it might need some more restrictions) while RLIMIT_VMEM will stop a script from using up too much "virtual memory" (ram + swap and so forth). As for a script that sleeps for a long time, well, at the moment nothing will catch it, except that if a specific user's scripts are all sleeping for a long time, he won't be able to run any more scripts, assuming I implement some sort of "process limit" at a user level. Eventually the script in question should accumulate enough cpu time that the rlimit will kill it. (Even if it takes a long time to accumulate that much cpu time, it's obviously not doing anything to affect the box if it's sleeping all the time.)
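
    For what it's worth, setting those in the child (with BSD::Resource from CPAN) might look like this -- the numbers are just for illustration, and on Linux the address-space limit is spelled RLIMIT_AS rather than RLIMIT_VMEM:

        use strict;
        use warnings;
        use BSD::Resource;

        my $mem = 32 * 1024 * 1024;       # 32 MB of address space
        setrlimit(RLIMIT_CPU, 10, 10)     # 10 seconds of CPU time
            or die "setrlimit CPU failed: $!";
        setrlimit(RLIMIT_AS, $mem, $mem)
            or die "setrlimit AS failed: $!";

        exec $^X, $ARGV[0] or die "exec failed: $!";  # now run the submitted script

    A plain alarm() in the child just before running the code would also catch the sleeping loop on wall-clock time, long before RLIMIT_CPU gets around to it.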
      Don't forget to provide a means for users to kill the jobs that they submit. That would be a useful feature for all concerned.
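
      A kill hook could be as small as this, assuming the parent keeps a pid-to-owner map along the lines of the earlier sketches:

          our %children;   # pid => owning user (maintained by the server loop)

          sub kill_jobs_for {
              my ($user) = @_;
              for my $pid (keys %children) {
                  next unless $children{$pid} eq $user;
                  kill 'TERM', $pid;                  # polite request first
                  sleep 1;                            # brief grace period
                  kill 'KILL', $pid if kill 0, $pid;  # force it if still alive
              }
          }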

      Not knowing what range of tasks the untrusted perl code is expected to address, I wonder whether there might be a tension between keeping the server safe and providing an adequate range of resources to support meaningful tasks (e.g. access to non-core modules? user-specifiable input data? debugging mode?). Anyway, good luck with it.

        Providing a means to kill runaway processes is definitely a good idea; I'll have to see exactly what is needful.

        Perhaps I should have explained further what I'm contemplating with the use of this code. The main point is to provide a perl scripting language for a MU* type of system, so the "tasks" performed by the various bits of code should be fairly simple, and shouldn't need to use any "outside" resources.
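
        For simple, self-contained tasks like that, a Safe compartment (the core Safe module) may be enough of a sandbox for the snippets themselves -- its default op mask already bars file, network and process ops, and permit_only()/deny() can tighten it further. A tiny sketch, with a made-up snippet standing in for user code:

            use strict;
            use warnings;
            use Safe;

            # stand-in for a user-submitted snippet
            my $untrusted_code = 'my $x = 0; $x += $_ for 1 .. 10; $x';

            my $sandbox = Safe->new;
            my $result  = $sandbox->reval($untrusted_code);
            die "script failed: $@" if $@;
            print "result: $result\n";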