Re: Limiting script cpu time
by sgifford (Prior) on Jun 10, 2003 at 01:39 UTC
If you just want to stop it from hogging the CPU, using nice(1) or renice(8) will probably do what you want. There's probably a more Perlish way that CPAN will reveal.
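One "more Perlish" option is the setpriority built-in, a thin wrapper over setpriority(2). A minimal sketch, assuming a Unix-ish system where WHICH=0 means PRIO_PROCESS and WHO=0 means the current process (the nice value of 10 is just an illustrative number):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: lower this process's own scheduling priority from within
# Perl, roughly what `renice +10 -p $$` would do from the shell.
# setpriority(WHICH, WHO, PRIORITY): 0, 0 means "this process".
my $before = getpriority(0, 0);
setpriority(0, 0, 10) or warn "setpriority failed: $!";
my $after = getpriority(0, 0);
print "nice value went from $before to $after\n";
```

As with renice, an unprivileged process can raise its nice value but not lower it back down again.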
Re: Limiting script cpu time
by mr_stru (Sexton) on Jun 10, 2003 at 04:48 UTC
Proc::NiceSleep on CPAN will give you access to nice. However, nice doesn't actually limit how much CPU time the process uses; rather, it controls how much priority the process gets relative to others.
What this means is that if nothing else is running on the system, your niced process will still use as much CPU time as it can. If something else is running, though, your process will get a lower priority for CPU time. This may or may not be what you want.
And it should be noted that arguments to nice aren't consistent from Unix to Unix, so check your system's manpages for details.
Caveat: I've not used the module myself.
There's also Apache::LoadAvgLimit, which you might be able to borrow code from, although it looks like a similar solution to the one proposed above: check the system load and sleep if it's too high. It doesn't stop the script hammering the CPU in between sleeps, so it's probably not as useful a solution as nice.
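The check-the-load-and-sleep approach can be sketched in a few lines of plain Perl on Linux (this reads /proc/loadavg directly; the threshold, retry count, and sleep interval are made-up numbers for the example, not anything taken from Proc::NiceSleep or Apache::LoadAvgLimit):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of "check system load, sleep if it's too high" on Linux.
# The first field of /proc/loadavg is the 1-minute load average.
sub load_avg {
    open my $fh, '<', '/proc/loadavg' or return 0;
    my ($one_min) = split ' ', <$fh>;
    return $one_min;
}

my $max_load = 2.0;    # arbitrary threshold for this example

for (1 .. 3) {         # bounded here so the example terminates
    last if load_avg() <= $max_load;
    sleep 5;           # back off until the machine quiets down
}
print "load is ", load_avg(), ", doing the next chunk of work\n";
```

Note the caveat from the post above still applies: between checks the script hammers the CPU as hard as ever.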
Struan
FreeBSD also has built-in support for per-user process memory limits, etc. The configuration for this is stored in /etc/login.conf.
Re: Limiting script cpu time
by thor (Priest) on Jun 10, 2003 at 01:12 UTC
IIRC, you can use the ulimit Unix command to limit CPU time, among other things. Just type man ulimit for more info. Or, for a more Perlish solution:
#!perl
$SIG{ALRM} = sub { die "Time Out!" };
alarm(120);  # this script will run for at most 120 seconds
# your script goes here
thor
I think he/she's after not taking up too much %CPU, not killing the script after it's done half the image. This can be achieved with the "nice" Unix command. I don't know of a module that can do the same from within Perl. There is a module, kstat, which will give you the current CPU load for processes, so you could parse that for your process ID and do a few waits if the load is high.
Re: Limiting script cpu time
by DrHyde (Prior) on Jun 10, 2003 at 09:34 UTC
You can use the nice(1) and renice(1) commands to change the priority of your process, but that might not be what you want. I was trying to do something similar recently - I wanted a process to use less than the resources available to it, because the machine ran too hot otherwise. The solution was to micro-sleep. See the Time::HiRes module and the documentation for the select built-in function.
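The micro-sleep idea looks roughly like this (a sketch with made-up work-slice sizes and sleep intervals; Time::HiRes ships with Perl):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(sleep);   # makes sleep() accept fractional seconds

# Micro-sleep sketch: after every slice of work, sleep a fraction of a
# second so the process never runs the CPU flat out. The slice size
# and the 0.01s pause are arbitrary - tune them for the duty cycle
# you're willing to generate.
my $total = 0;
for my $chunk (1 .. 20) {
    $total += $_ for 1 .. 10_000;   # a small slice of "work"
    sleep 0.01;                     # yield the CPU between slices
}
print "done, total = $total\n";
```

The same pause could be written with the select built-in as `select undef, undef, undef, 0.01;` if you'd rather avoid the module.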
Re: Limiting script cpu time
by bm (Hermit) on Jun 10, 2003 at 11:36 UTC
You will probably find this thread a very interesting read.
Re: Limiting script cpu time
by zentara (Cardinal) on Jun 10, 2003 at 15:01 UTC
Some fine monk wrote this snippet a few months ago.
#!/usr/bin/perl
# Description: these subs let you limit the maximum %CPU your script
# will use. CPU_start() must be called once when your script starts.
# This example will use at most 30% CPU until Ctrl-C is pressed:
CPU_start();
CPU_max(30) while 1;

use Time::HiRes qw(time);

sub CPU_used {
    # fields 13 and 14 of /proc/self/stat are utime and stime, in jiffies
    (map { $_->[13] + $_->[14] }
        [split " ", Cat("/proc/self/stat")])[0]
}

{
    my %start = (tm => 0, cpu => 0);
    sub CPU_start { @start{"tm","cpu"} = (time(), CPU_used()) }
    sub CPU_max {
        my ($max, $real, $cpu) = ($_[0], time() - $start{tm},
                                  CPU_used() - $start{cpu});
        return unless defined($max) and $max > 0;
        &sleep( $cpu/$max - $real );
    }
}

#
# helpers used by CPU_used() and CPU_max()
#
sub sleep { select undef, undef, undef, $_[0] }

sub Cat {
    local *F;
    open F, "< " . $_[0] or return;
    local $/ unless wantarray;
    return <F>;
}
Re: Limiting script cpu time
by sdbarker (Sexton) on Jun 10, 2003 at 22:12 UTC
Sorry for the confusion.
What brings this question about is that I've recently learned that my webhost kills any process that takes up more than three CPU seconds. That's what I need to work around, so that my script doesn't just up and die.
-Scott
Nothing posted here so far will help with that. If the 3 CPU-second limit is for only one process, you can avoid it by breaking up the job into several smaller jobs, then executing all of them; or by monitoring how much CPU time you've used, and right before you use your quota up simply forking a new process and continuing the work there. Here's a small code snippet that demonstrates this forking; note that it abuses signal handlers, and so may not work on all platforms. It generates 10,000,000 pseudorandom numbers, printing every 1,000,000th. It should run the same with or without forking.
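A sketch of the fork-and-continue approach (this version polls the times built-in rather than abusing signal handlers, and assumes the cap is per process; the limit, margin, and loop sizes are illustrative, scaled down from the 10,000,000 iterations described above):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: keep working past a per-process CPU quota by forking a fresh
# process just before the quota runs out. $limit and $margin are
# illustrative numbers, not anything a real host guarantees.
$| = 1;              # flush output so the fork doesn't duplicate buffers

my $limit  = 3.0;    # CPU seconds the host allows one process (assumed)
my $margin = 0.5;    # re-fork when we get this close to the limit

sub cpu_used {
    my ($user, $system) = times();
    return $user + $system;
}

sub maybe_refork {
    return if cpu_used() < $limit - $margin;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    exit 0 if $pid;   # old process quits under the quota...
    # ...and the child carries on here with a fresh CPU-time clock
}

for my $i (1 .. 1_000_000) {
    my $r = rand();
    print "$i: $r\n" if $i % 100_000 == 0;
    maybe_refork()   if $i % 10_000  == 0;
}
```

Because fork copies the whole process image, the child resumes mid-loop at the same `$i`, so the output is the same whether or not a fork ever happens.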
The fun part about this approach is that it's actually much harder on the server than if they just let you bypass the stupid limit. If you're lucky, they'll notice this and work with you to make your and their lives better. If you're unlucky, they'll just shut off your account.
If the limit applies to a process and all of its children together, you're pretty much screwed.
Apart from hack value, though, the real solution is to make your code more efficient, or else find a hosting provider that better meets your needs.
It's not so much my code that's inefficient as it is the two Image::Magick calls for resizing and writing the image.
Thanks for your help, everybody. I really appreciate it. I'll look into abusing signal handlers and forking, and see if I can't get them to let me use a little more CPU time. Though that doesn't really help if this eventually moves to other hosts that also impose CPU restrictions.
-Scott
What about using sleep()? Or Time::HiRes::sleep()?