I would strongly suggest using cron, scheduled tasks, or other cron-like mechanisms for this, for one main reason: that's what cron is there for. Cron and its cousins have been around for a long time and are relied upon for critical things (e.g. backups). Cron is very bulletproof. If your program dies, it dies. If a cron job dies, it happily starts again at the next iteration (if it can), with some nice logging to boot.
The other reason is that cron is the idiom for *nix jobs like this. Once someone says "Well, this is supposed to happen every 30 minutes," I (and I imagine every other sysadmin) go running for the crontabs. No reason to obscure this in a program.
I would not reinvent the wheel on this one.
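For example, a crontab entry along these lines (the script path and log location are just placeholders) would run the job every 30 minutes and keep a log of each run:

    # min hour dom mon dow  command
    0,30 * * * * /usr/local/bin/cleanup.pl >> /var/log/cleanup.log 2>&1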
grep
grep> cd pub
grep> more beer
As far as I can see, you shouldn't have problems with memory leaks here. However, instead of a "loop: ... goto loop;" construct, I'd put the entire thing inside a "while(1) { ... }" loop.
Also, unless there's some reason for the response time, I'd increase the sleep to at least 60 seconds.
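A minimal sketch of that structure, with the per-pass work hidden behind a hypothetical check_files() sub standing in for whatever the "loop:" body currently does:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # check_files() is a placeholder for the existing per-pass logic
    sub check_files {
        # ... scan the directory, process whatever is found ...
    }

    while (1) {
        check_files();
        sleep 60;    # one pass a minute instead of a tight loop
    }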
Thanks,
James Mastros,
Just Another Perl Scribe
What I tend to do when faced with a problem like the one you describe (< 1 min intervals, eternal life-span) is to combine what you did with cron. To avoid the nasty effects of any eventual memory leaks, and to flush things out periodically, I code in an absolute maximum number of iterations, to the tune of:
    my $tries = 0;
    while (++$tries < $any_number_you_wish) {
        # your code here
    }
    exit;
and then have cron run this daemon periodically. Also, be sure to take a look at Proc::Daemon for detaching from your controlling tty, which is always nice for a daemon to do. You might also want to look at Proc::Watchdog for keeping a flag that tells whether your daemon is running or not.
With a bit of code, you can have your daemon write its pid to a file while it is running. At the beginning of its execution, it can look for this file and attempt a dummy kill (signal 0) to see whether an earlier instance is still alive, dying in that case. Otherwise, processing begins for the given number of iterations.
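A minimal sketch of that startup check, assuming a hypothetical PID file location (the dummy kill is kill 0, which probes for the process without actually signalling it):

    use strict;
    use warnings;

    my $pidfile = '/var/run/mydaemon.pid';    # hypothetical location

    # If a previous run left a PID file, see whether that process is still alive.
    if (open my $old, '<', $pidfile) {
        chomp(my $old_pid = <$old>);
        close $old;
        die "Already running as PID $old_pid\n"
            if $old_pid && kill 0, $old_pid;
    }

    # Record our own PID for the next run to find.
    open my $fh, '>', $pidfile or die "Cannot write $pidfile: $!";
    print $fh "$$\n";
    close $fh;

Once this check passes, the detach (e.g. via Proc::Daemon) and the bounded while loop above can follow.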
cron could then run your daemon periodically (say, every minute, which would on average cause a 30-second outage when it dies) to ensure it stays up whatever happens. If you go this way, I would defer the detach until basic tests (files and dirs exist, etc.) have been performed, so that cron could also be used as a reporting mechanism.
I think it is definitely all right to run code constantly; I have done it a lot in the past with daemons listening on sockets and forking processes to handle clients. I tend to use something more like this:
    while (1) {
        opendir(DIR, $watcheddir);
        @files = grep { !/^\./ } readdir(DIR);    # skip dotfiles
        closedir(DIR);
        foreach $currentfile (@files) {
            &filecontents;
        }
    }
Really no need to use the sleep, and gotos should have gone out with QBASIC :). With something as simple as this, don't worry about CPU usage; I would be amused to see if it even sparked a blip on the "top" radar.
tradez
Unfortunately, I can't wait a whole minute; 20-25 seconds is the longest I could wait. As far as I know, cron only allows scheduling down to a minimum granularity of one minute.
This could be a really Bad Idea, but it popped into my head anyway...
How about using cron to call the script every minute, and have the script perform the file check/delete a couple of times over a forty-second period?
I'd be interested in what other monks think about this solution.
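A minimal sketch of that idea, with the real work hidden behind a hypothetical check_and_delete() sub; cron starts this script once a minute, and each run makes three passes roughly 20 seconds apart (at 0, 20, and 40 seconds in):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # check_and_delete() is a placeholder for the actual file check/delete logic
    sub check_and_delete {
        # ... scan the watched directory and remove what needs removing ...
    }

    for my $pass (1 .. 3) {
        check_and_delete();
        sleep 20 unless $pass == 3;    # no point sleeping after the last pass
    }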
Should be fine ... no biggy.
Might want to kick it off with 'nice'.
Geez ... some people really overthink some things.