in reply to Running untrusted perl code
Given tachyon's very apt reply, the question becomes: which server are you talking about in point 1? If it's the web server process, then tachyon is right, and this is a bad idea regardless of the constraints you try to place on a given child process.
But if there is a dedicated server, whose sole purpose is to receive requests that contain code to be executed in a safe environment, then you have a chance of controlling how many children can be active at any one time.
Maybe a web service could use this sort of setup: take requests from clients, pass them on to a dedicated script-runner server, and then look for some sort of feedback from that server about the outcome of each request (e.g. it was rejected, it was queued to run as soon as the current child(ren) finish, it is running now, etc). You'd also need to handle the extra complication of keeping track of where to send the results of child processes, given that they run apart from the web server -- I'm actually not clear on how that could be done...
For that matter, if you could figure out some way for the web server itself to keep track of how many children are in progress, that could suffice.
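For what it's worth, the counting part of such a dedicated server doesn't have to be elaborate. Here's a minimal sketch (the port, the $MAX_KIDS cap, and the REJECTED/RUNNING replies are all made up for illustration): a forking listener that turns away new jobs once the configured number of children are running, reaping them via SIGCHLD.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use POSIX ":sys_wait_h";

my $MAX_KIDS = 4;      # hypothetical cap -- tune to taste
my $kids     = 0;

# Reap finished children and keep the running count honest.
$SIG{CHLD} = sub {
    $kids-- while waitpid(-1, WNOHANG) > 0;
};

my $server = IO::Socket::INET->new(
    LocalPort => 9999,  # hypothetical port
    Listen    => 5,
    Reuse     => 1,
) or die "listen: $!";

while (1) {
    my $client = $server->accept or next;   # accept() can be interrupted by SIGCHLD
    if ($kids >= $MAX_KIDS) {
        print $client "REJECTED: too busy\n";   # or queue the request instead
        close $client;
        next;
    }
    my $pid = fork();
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {               # child: run one submitted job, then exit
        print $client "RUNNING\n";
        # ... execute the untrusted code here, inside whatever Safe
        # compartment / resource fencing has been decided on ...
        close $client;
        exit 0;
    }
    $kids++;                       # parent: count the child, go back to accept()
    close $client;
}
```

Queuing instead of rejecting would just mean stashing the client socket (or the request) until the SIGCHLD handler frees a slot.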
I'm not familiar with tweaking process limits at run-time, so I'd have to ask: what sort of limit setting will stop a script that goes into an infinite loop like while(1) { do_something_minor; sleep 1; }?
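(Thinking about it a bit more: a CPU-seconds limit like RLIMIT_CPU won't stop that particular loop, since a process that spends nearly all its time sleeping accrues almost no CPU time; a wall-clock alarm() on top of it will.) Here's a minimal sketch of that combination, assuming BSD::Resource from CPAN is available; run_limited is a made-up name for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use BSD::Resource;   # from CPAN

# Run one chunk of untrusted code in a child with both a CPU-seconds
# cap and a wall-clock cap.  RLIMIT_CPU alone will NOT stop
# while(1){ sleep 1 } -- that loop burns almost no CPU -- so the
# alarm() supplies the wall-clock ceiling.
sub run_limited {
    my ($code, $cpu_secs, $wall_secs) = @_;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                       # child
        setrlimit(RLIMIT_CPU, $cpu_secs, $cpu_secs)
            or die "setrlimit: $!";
        alarm($wall_secs);                 # SIGALRM's default action kills us
        $code->();
        exit 0;
    }
    waitpid($pid, 0);                      # parent just waits for the verdict
    return $?;                             # caller inspects exit/signal status
}

# The "sleepy" infinite loop: only the alarm ends it, after 5 wall seconds.
my $status = run_limited(sub { while (1) { sleep 1 } }, 2, 5);
printf "child ended with status %d (signal %d)\n", $status >> 8, $status & 127;
```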
Replies are listed 'Best First'.

Re: Re: Running untrusted perl code
  by BUU (Prior) on May 30, 2004 at 21:27 UTC
  by graff (Chancellor) on May 31, 2004 at 01:46 UTC
  by BUU (Prior) on May 31, 2004 at 04:55 UTC