You can use mod_perl or FastCGI to limit CGI runtime, reload programs to guard against memory leaks, limit the number of scripts running at any one time, load shared objects, and preload CGIs (almost like multithreading), all while reducing server load. Nowadays there is no reason to go without such insanely useful Apache mods. My argument against alarm is that it is A solution but not the best solution. The best solution is simply to remove the infinite loop or the memory leak. If you just throw in an alarm, your script may be killed, perhaps for no reason at all, while you're searching desperately for the error. It also isn't portable, in the sense that the script may take 10 seconds on computer #1 and 400 seconds on computer #2. That is not good: making behavior depend on wall-clock time is not a good way to ensure user-friendliness. Using the FastCGI mechanism rather than alarm is also smarter, since FastCGI is server-dependent rather than script-dependent, and the server isn't likely to be moving around much; you get the same functionality through a better channel.
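For reference, the alarm approach being argued against usually looks something like this (a minimal sketch; the 10-second timeout and the `do_work` routine are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Wrap potentially runaway work in an alarm-based timeout. If the code
# takes longer than $timeout wall-clock seconds, SIGALRM fires and the
# eval block dies -- regardless of whether the script was actually stuck
# or the machine was just slow that day.
my $timeout = 10;    # arbitrary; this is exactly the non-portable knob
my $result;
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm($timeout);
    $result = do_work();    # hypothetical long-running routine
    alarm(0);               # cancel the alarm on success
};
if ($@) {
    die $@ unless $@ eq "timeout\n";
    warn "script exceeded ${timeout}s and was aborted\n";
}

sub do_work { return 42 }   # placeholder for the real work
```

Note that the timeout is pure wall-clock time, which is the portability problem described above: the same `$timeout` that is generous on one machine may be fatal on another.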
AgentM Systems nor Nasca Enterprises nor Bone::Easy nor MacPerl is responsible for the comments made by AgentM. Remember, you can build any logical system with NOR.
Much agreed with your points here, but frequently someone looking to do this isn't the admin of the site in question. Or is the admin but can't install new software due to PHB/marketroid restrictions. Or the OS in question doesn't support module X. Or the server isn't Apache. I'm assuming (yeah, I know what happens when I do that...) that one of these is the case, since silicon39 mentioned that they are still running perl4. So even if it isn't the best way, as you said, it is a way, and could under some circumstances be the best way.
-marius
If you know you'll be running on a Linux system, you could look at getrlimit(2) and setrlimit(2). Assuming syscall.ph has been correctly h2ph'd, you should be able to get this to work from within Perl. All you'd need to do then is set your RLIMIT_CPU to, say, 1 second, and your script will be killed by the kernel if it uses more than that.
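A sketch of that from Perl, using the BSD::Resource CPAN module rather than a raw syscall.ph syscall, since h2ph output varies from system to system (the 1-second soft limit matches the suggestion above; the 2-second hard limit is my own choice):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use BSD::Resource qw(setrlimit getrlimit RLIMIT_CPU);

# Ask the kernel to deliver SIGXCPU after 1 CPU-second (soft limit)
# and to kill the process outright at 2 CPU-seconds (hard limit).
# Unlike alarm(), this counts CPU time actually consumed, not
# wall-clock time, so a script idling on a busy server is safe.
setrlimit(RLIMIT_CPU, 1, 2)
    or die "setrlimit failed: $!";

my ($soft, $hard) = getrlimit(RLIMIT_CPU);
print "RLIMIT_CPU soft=$soft hard=$hard\n";

# Anything after this point -- including a runaway while(1) loop --
# is killed by the kernel once it burns through the CPU budget.
```

Note that a process can lower its limits but cannot raise the hard limit again without privileges, so set this once, early, before the untrusted work starts.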
The advantage is that it's dependent on how much CPU time your script uses. If the server is busy with some other process, your script may run for a lot of wall-clock time, but it will not be killed prematurely. The same functionality no doubt exists in other systems, but this is not going to be the most portable implementation in the world.
We used this to good effect within the suexec wrapper for Apache: at the same time you can prevent memory hogs and fork() bombs, and it applies to all CGI scripts on the server.
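The memory-hog and fork-bomb limits mentioned work the same way, just with different resource names from getrlimit(2) (the 256 MB and 256-process values here are invented for illustration, not what we actually used):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use BSD::Resource qw(setrlimit getrlimit RLIMIT_AS RLIMIT_NPROC);

# Cap the address space at 256 MB and the number of processes this
# user may have at 256: plenty for a typical CGI, but a memory hog
# or a fork() bomb hits the wall quickly.
my $mem_cap  = 256 * 1024 * 1024;
my $proc_cap = 256;

setrlimit(RLIMIT_AS,    $mem_cap,  $mem_cap)  or die "RLIMIT_AS: $!";
setrlimit(RLIMIT_NPROC, $proc_cap, $proc_cap) or die "RLIMIT_NPROC: $!";

my ($mem_soft)  = getrlimit(RLIMIT_AS);
my ($proc_soft) = getrlimit(RLIMIT_NPROC);
print "RLIMIT_AS=$mem_soft RLIMIT_NPROC=$proc_soft\n";
```

Because limits are inherited across fork() and exec(), setting them in a wrapper like suexec covers every CGI it launches without touching the scripts themselves.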