PerlMonks  

Re: Re: Perl infinite loops!

by marius (Hermit)
on Dec 06, 2000 at 21:57 UTC ( [id://45254] )


in reply to Re: Perl infinite loops!
in thread Perl infinite loops!

Wouldn't a high $x for alarm($x) be worthwhile to look at? If you know what your script does, you ideally know how long it "should" take to run; multiply that number by 5 or 10 to account for high server load, and voila.
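The alarm() idea above can be sketched as follows (the pattern is the standard one from perldoc -f alarm; the 20-second budget and the do_real_work() stand-in are illustrative, not from the original post):

```perl
#!/usr/bin/perl -w
use strict;

my $timeout = 20;    # ~5-10x the script's normal runtime, per the advice above

sub do_real_work { sleep 1 }    # stand-in for the real CGI logic

eval {
    # The trailing "\n" stops Perl appending " at script line N" to the message.
    local $SIG{ALRM} = sub { die "alarm\n" };
    alarm($timeout);
    do_real_work();
    alarm(0);        # cancel the pending alarm on success
};
if ($@) {
    die $@ unless $@ eq "alarm\n";    # re-throw unexpected errors
    print "Content-type: text/plain\n\nTimed out after ${timeout}s\n";
} else {
    print "finished in time\n";
}
```

Note that alarm() counts wall-clock seconds, not CPU time, which is relevant to the portability objection raised further down this thread.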

I'm curious how long the CGI gets to run in an infinite loop before Apache zombies the process. I know I've accidentally done that, realized it after about 30-45 seconds, then popped onto the server to SIGKILL the offending perl process.

-marius

Replies are listed 'Best First'.
Re: Re: Re: Perl infinite loops!
by AgentM (Curate) on Dec 06, 2000 at 22:14 UTC
    You can use mod_perl or FastCGI to limit CGI runtime, reload programs to guard against memory leaks, limit the number of scripts running at any one time, load shared objects, preload CGIs (almost like multithreading), and reduce server load. Nowadays, there is no reason to go without such insanely useful Apache mods.

    My argument against the alarm is that it is A solution but not the best solution. The best is simply to remove the infinite loop or memory leak. If you just throw in an alarm, your script will be killed, perhaps for no reason at all, while you search desperately for the error. Also, this is not portable, in the sense that the script may take 10 seconds on computer #1 and 400 seconds on computer #2. That is not good; throwing in a dependence on time is not a good way to ensure user-friendliness. Using the FastCGI mechanism rather than alarm is also smarter, since FastCGI is not script dependent but server dependent, and the server isn't likely to be moving around much -- the same functionality via a better channel.
    AgentM Systems nor Nasca Enterprises nor Bone::Easy nor Macperl is responsible for the comments made by AgentM. Remember, you can build any logical system with NOR.
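Part of what AgentM describes can be had from Apache's core resource-limit directives without installing anything extra; a hedged httpd.conf fragment (the directive names are real Apache core directives applying to processes forked by the server, such as CGI scripts, but the numbers are illustrative):

```apache
# Cap resources for processes Apache forks (i.e. CGI scripts).
RLimitCPU   10 20        # soft/hard CPU-seconds per CGI process
RLimitMEM   16777216     # soft memory limit in bytes (16 MB)
RLimitNPROC 5 10         # soft/hard process counts, to blunt fork() bombs
Timeout     300          # drop connections that stall
```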
      Agreed with you on all your cases here, but frequently someone looking to do this isn't necessarily the admin of the site in question. Or is the admin but can't install new software due to PHB/marketroid restrictions. Or the OS in question doesn't support module X. Or the server isn't Apache. I'm assuming (yeah, I know what happens when I do this..) that one of these is the case, since silicon39 mentioned they are still running perl4. So even if it isn't the best way, as you said, it is a way, and under some circumstances could be the best way.

      -marius
      If you know you'll be running on a Linux system, you could look at getrlimit(2) and setrlimit(2). Assuming syscall.ph has been correctly h2ph'd, you should be able to get this to work from within perl. All you'd need to do then is set your RLIMIT_CPU to, say, 1 second, and your script will be killed by the kernel if it uses more than that.

      The advantage is that it's dependent on how much CPU time your script uses. If the server is busy with some other process your script may run for a lot of wall-clock time, but it will not be killed prematurely.

      The same functionality no doubt exists in other systems, but this is not going to be the most portable implementation in the world.

      We used this to good effect within the suexec wrapper for Apache - at the same time you can prevent memory hogs and fork() bombs, and it applies to all CGI scripts on the server.
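A sketch of the RLIMIT_CPU approach, using the BSD::Resource CPAN module rather than a hand-h2ph'd syscall.ph (the module is my substitution, not what the post above used; the 1- and 3-second limits are illustrative):

```perl
#!/usr/bin/perl -w
use strict;
use BSD::Resource qw(setrlimit getrlimit RLIMIT_CPU);

# Soft limit: 1 CPU-second (kernel sends SIGXCPU); hard limit: 3
# CPU-seconds (kernel sends an uncatchable SIGKILL).
setrlimit(RLIMIT_CPU, 1, 3) or die "setrlimit: $!";

my ($soft, $hard) = getrlimit(RLIMIT_CPU);
print "CPU capped at ${soft}s soft / ${hard}s hard\n";

# An accidental infinite loop after this point dies on its own once it
# burns through 1 CPU-second -- wall-clock time spent waiting on a
# loaded server does not count against the budget.
```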
