in reply to getting around an ISPs processing cap

The suggestions above about using a checkpoint file are good. But as a cheap hack, can you just fork periodically and kill the parent after the fork, leaving the child running the same thread of execution? I am not sure whether they count parent time against a child process. Also, this may not work depending on what you need to keep open across the fork.
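For reference, the checkpoint-file idea mentioned above could be as simple as this sketch (the file name and record format here are my own invention, not from the thread):

```perl
use strict;

# Hypothetical checkpoint file; the real name and format are up to you.
my $checkpoint = "/tmp/myjob.checkpoint";

# Resume from the last saved position (0 if there is no checkpoint yet).
sub load_checkpoint {
    open my $fh, '<', $checkpoint or return 0;
    chomp(my $pos = <$fh>);
    return $pos;
}

# Record how far we got, so the next cron run picks up from there.
sub save_checkpoint {
    my ($pos) = @_;
    open my $fh, '>', "$checkpoint.tmp" or die "open: $!";
    print $fh "$pos\n";
    close $fh;
    rename "$checkpoint.tmp", $checkpoint or die "rename: $!";  # atomic update
}

unlink $checkpoint;               # start fresh for this demo
my $start = load_checkpoint();    # 0 on the first run
for my $i ($start .. $start + 4) {
    # ... process work item $i here ...
    save_checkpoint($i + 1);      # survived item $i; resume after it next time
}
```

When the ISP kills the process mid-run, the next cron invocation just calls load_checkpoint() and carries on from the last completed item.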

my $fork_period = 30*60; # 30 minutes
my $forktime = time + $fork_period;
while (1) {
    # Do my happy processing.
    # This must loop periodically (at most once a minute)
    # otherwise you may badly miss your forktime and get
    # killed.
    # We don't want to fork too often (or they might think
    # we are a runaway and kill us).
    if (time > $forktime) {
        # Kill the parent
        exit if fork();
        $forktime = time + $fork_period;
    }
}

BTW they might be using resource limits (man getrlimit or ulimit) to kill the process. That may be smart enough to go across children.
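If you want to see what limits are actually in effect, here is a hedged sketch using the BSD::Resource CPAN module (an assumption on my part -- it is not a core module, so this probes for it and falls back to a hint):

```perl
use strict;

my $msg;
if (eval { require BSD::Resource; 1 }) {
    # getrlimit returns (soft, hard) in list context.
    my ($soft, $hard) =
        BSD::Resource::getrlimit(BSD::Resource::RLIMIT_CPU());
    $msg = "CPU limit (seconds): soft=$soft hard=$hard";
    # -1 here means RLIM_INFINITY, i.e. no CPU-time limit is set.
} else {
    $msg = "BSD::Resource not installed; try 'ulimit -t' in the shell";
}
print "$msg\n";
```

Knowing whether the soft limit on RLIMIT_CPU is a few seconds or unlimited would tell you whether it is rlimits doing the killing or some watchdog script.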

-ben

Replies are listed 'Best First'.
Re: Re: getting around an ISPs processing cap
by merlyn (Sage) on May 09, 2001 at 17:49 UTC
    Actually, that's a pretty good technique (which I was thinking of before I scrolled down to see yours!).

    The limit stuff applies per process, so the kids would get a reset counter for all of them.
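    A quick standalone demo of that reset (my own sketch, not from the post): burn some CPU, fork, and compare counters -- a child's CPU-time counters start at zero after fork:

```perl
use strict;
use List::Util qw(sum);

# Burn some CPU in the parent before forking.
my $x = 0;
$x += sqrt($_) for 1 .. 2_000_000;
my $parent_cpu = sum(times);      # nonzero by now

# Pipe so the child can report its own counter back to the parent.
pipe(my $reader, my $writer) or die "pipe: $!";

my $child_cpu;
if (my $pid = fork) {             # parent
    close $writer;
    chomp($child_cpu = <$reader>);
    waitpid($pid, 0);
    print "parent had $parent_cpu CPU seconds; child restarted at $child_cpu\n";
} else {                          # child: counters were reset at fork
    die "fork failed: $!" unless defined $pid;
    close $reader;
    print $writer sum(times), "\n";
    close $writer;
    exit 0;
}
```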

    About the only thing this messes up is the "who's yer daddy?" situation. In fact, they'd be children of init(8), so when they died they'd be charged not to your process, but to the great granddaddy. Pretty slick.

    If the trigger was strictly on CPU time, you could just do this:

    use List::Util qw(sum);
    while (1) {
        if (sum(times) > 30) { # have I used more than 30 CPU seconds?
            fork and exit;     # carry on, my wayward son, there'll be peace when you are done...
        }
        # ... rest of processing here ...
    }

    -- Randal L. Schwartz, Perl hacker

Re: Re: getting around an ISPs processing cap
by geektron (Curate) on May 10, 2001 at 00:10 UTC
    BTW they might be using resource limits (man getrlimit or ulimit) to kill the process. That may be smart enough to go across children.

    hmm...  man getrlimit says:

    Limits on the consumption of system resources by the current process and each process it creates may be obtained with the getrlimit() call, and set with the setrlimit() call.
    sounds like child processes will get nuked as well. ( i know the ISP's using FreeBSD, and so am i. man page straight outta FreeBSD 4.2 ).

    this is exactly the kind of stuff i'm worried about

    i can't get in and change the frequency of the crontab, or anything remotely useful. ( the ISP requires a cron.sh file -- which contains job names -- and THEY control the crontab )

    so forking won't really be an option . . .

    i need to do more research on how the ISP is doing the resource limiting, i guess. I was hoping for a quick way out of the problem.

    BTW -- i'm not talking long limits. the script gets killed in a matter of minutes. single digit minutes.