dez has asked for the wisdom of the Perl Monks concerning the following question:

Hi! I've got a serious problem. I have software that runs simple C programs supplied by random internet users and checks their output, and these will sometimes go into an endless loop; the entire system is in danger when some dumbass submits a program that writes "fuck you" to a file in an endless loop. I have a script for terminating these processes that works quite nicely, but when the C program is writing to a file it cannot be killed, for a reason I don't understand... what can I do? Can I limit the CPU usage of a process, or what? (I think the 99.9% CPU usage is the reason the parent can't kill the child.)

Replies are listed 'Best First'.
Re: killing process...or limiting it's cpu time
by belg4mit (Prior) on Jan 10, 2002 at 13:31 UTC
    Oh man, I wouldn't touch that with a forty-foot pole: major security issues.

    But to answer the question you asked: try forking before exec'ing the C program (perhaps via nice). Then sleep, and then kill the child. See perlipc for good examples of using fork.
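    The fork / sleep / kill pattern described above can be sketched in C (the Perl version lives in perlipc). This is a minimal sketch assuming a POSIX system; the spinning child stands in for a submitted program, where real code would exec the compiled binary instead:

    ```c
    /* Sketch of the fork / sleep / kill timeout pattern. The child here
       just spins forever, standing in for a hostile submission; in real
       use you would exec the compiled C program in the child branch. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {              /* child: stand-in for the untrusted program */
            for (;;) ;               /* endless loop, like the hostile submissions */
        }

        sleep(1);                    /* the child's time allowance */
        kill(pid, SIGKILL);          /* SIGKILL cannot be caught or ignored */

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL)
            printf("child killed after timeout\n");
        return 0;
    }
    ```

    Note that the parent uses SIGKILL rather than SIGTERM, so a child spinning at 100% CPU still dies: the kernel, not the child, enforces it.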

    --
    perl -pe "s/\b;([st])/'\1/mg"

Re: killing process...or limiting it's cpu time
by MZSanford (Curate) on Jan 10, 2002 at 14:00 UTC
    belg4mit is sooooo right about the security risk ... but if you must do this, I suggest the following:
    • Run the code as an unprivileged user
    • chroot the unprivileged user to a specific dir
    • If this is Linux, look into per-user CPU limits.
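    Per-user limits are usually configured system-wide, but the same kind of CPU ceiling can be set per-process with setrlimit(2) in the forked child just before exec'ing the submitted program. A minimal sketch, assuming a POSIX system (the 2-second figure is just an illustrative choice):

    ```c
    /* Cap this process's CPU time before exec'ing untrusted code.
       Exceeding the soft limit delivers SIGXCPU, which kills the
       process by default, and exec*() preserves the limit. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit lim = { .rlim_cur = 2, .rlim_max = 2 };  /* 2 CPU-seconds */
        if (setrlimit(RLIMIT_CPU, &lim) != 0) { perror("setrlimit"); return 1; }

        struct rlimit check;
        getrlimit(RLIMIT_CPU, &check);
        printf("cpu limit: %ld seconds\n", (long)check.rlim_cur);
        /* ...here the real harness would exec the submitted program... */
        return 0;
    }
    ```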

    $ perl -e 'do() || ! do() ;' Undefined subroutine &main::try
Re: killing process...or limiting it's cpu time
by snapdragon (Monk) on Jan 10, 2002 at 15:59 UTC
    Well, it could be that I've misunderstood this, but..... it seems to me that the core of the problem is this process endlessly writing to a file. If that's the case, try setting ulimit and then forking the process. This caps the size of files the process and its children may write (files of any size may still be read). Only a process with appropriate privileges can raise the limit.

    Like I said I could be way off here.... but that would not be the first time ;-)
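    The ulimit advice above corresponds to the setrlimit(2) call underneath it. A hedged sketch, assuming POSIX; the 1 KiB cap and the /tmp/limit_demo path are illustrative choices, and SIGXFSZ is ignored here only so the refused write is visible instead of fatal:

    ```c
    /* Cap the size of files this process (and its forked/exec'd
       children) may write. A write past the cap raises SIGXFSZ,
       fatal by default; ignoring it makes write() fail with EFBIG. */
    #include <errno.h>
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void) {
        struct rlimit lim = { .rlim_cur = 1024, .rlim_max = 1024 };  /* 1 KiB cap */
        if (setrlimit(RLIMIT_FSIZE, &lim) != 0) { perror("setrlimit"); return 1; }
        signal(SIGXFSZ, SIG_IGN);   /* so write() returns -1 instead of killing us */

        int fd = open("/tmp/limit_demo", O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        memset(buf, 'x', sizeof buf);
        ssize_t n = write(fd, buf, sizeof buf);   /* truncated at the 1024-byte cap */
        printf("wrote %zd bytes\n", n);

        n = write(fd, buf, sizeof buf);           /* now past the cap */
        if (n < 0 && errno == EFBIG)
            printf("second write refused: EFBIG\n");
        close(fd);
        unlink("/tmp/limit_demo");
        return 0;
    }
    ```

    With this in place, the "write garbage to a file forever" submission stalls at the cap instead of filling the disk.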

Re (tilly) 1: killing process...or limiting it's cpu time
by tilly (Archbishop) on Jan 10, 2002 at 17:20 UTC
    Another option for a secure sandbox is to look at UML (User Mode Linux). Not really a Perl answer, but running possibly offending processes in a virtual OS should give you all of the protection that you want.
Re: killing process...or limiting it's cpu time
by n3dst4 (Scribe) on Jan 10, 2002 at 17:16 UTC
    Everyone else has said things about bargepoles and security holes, so I won't re-iterate - but may I say "chroot"? It's not absolutely designed for security, but it beats the alternative.

    If we're going to assume you really have to do this (why? the curiosity is killing me!) the only way I know to limit resources absolutely is on a per-user basis on most OSes. This varies from OS to OS so ask your friendly local sysadmin.

    Aside from that, you can always nice() a process so your controlling process is able to kill it. Remember that if they're evil, they'll have installed signal handlers, so you'll have to kill -9 them (this is Unix speak; I don't know the equivalent on Windows).
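    The point about signal handlers can be sketched in C, assuming POSIX: a hostile child ignores SIGTERM, so the controlling process has to fall back on SIGKILL, which cannot be caught or ignored. The one-second sleeps are illustrative pauses to avoid races in the sketch:

    ```c
    /* Demonstrate why kill -9 is the fallback: the child ignores
       SIGTERM (as a hostile program might), but SIGKILL still works. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {                  /* child plays the hostile program */
            signal(SIGTERM, SIG_IGN);    /* shrug off polite termination */
            for (;;) pause();
        }

        sleep(1);                        /* let the child set up its handler */
        kill(pid, SIGTERM);              /* ignored by the child */
        sleep(1);
        if (waitpid(pid, NULL, WNOHANG) == 0)
            printf("SIGTERM ignored, child still alive\n");

        kill(pid, SIGKILL);              /* the uncatchable fallback */
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL)
            printf("SIGKILL worked\n");
        return 0;
    }
    ```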

    Good luck, and please tell us why you're doing it.

Re: killing process...or limiting it's cpu time
by metadoktor (Hermit) on Jan 10, 2002 at 14:06 UTC
    Don't do it! This is such a bad security risk. Why would you let any random user run an arbitrary program on your machine?

    metadoktor

    "The doktor is in."