This item is perhaps worthy of some special consideration:
Making this special "watch over the system" process a server daemon is worth considering. A daemon is basically a continually running process with no console interface. Typically a process like this needs to maintain some state information about current system status. Think about how many emails might get sent if some node fails: bombarding an email account with a bazillion messages that all say the same thing is not productive, so a "throttle" on repeated messages is usually desired. Because the process stays resident in memory, it can keep that state in an in-memory table instead of something like a disk file. I would write a simple client to talk to this thing, with one command: "status".
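Here is a minimal sketch of what I mean, assuming hypothetical check_nodes() and send_alert() helpers (stubbed below) and an arbitrary port 7777. It is the shape of the idea, not production code:

    #!/usr/bin/perl
    # Sketch of a "watcher" daemon: keeps status in an in-memory hash,
    # throttles repeat alerts, and answers a one-command "status" client.
    use strict;
    use warnings;
    use POSIX qw(setsid);
    use IO::Socket::INET;

    # Hypothetical stubs -- replace with real node checks and real mail.
    sub check_nodes { return ( node1 => 'up', node2 => 'down' ) }
    sub send_alert  { my ($node) = @_; warn "ALERT: $node is down\n" }

    # Detach from the terminal to become a daemon.
    chdir '/' or die "chdir: $!";
    defined( my $pid = fork ) or die "fork: $!";
    exit if $pid;              # parent exits, child carries on
    setsid() or die "setsid: $!";

    my $THROTTLE = 15 * 60;    # seconds between repeat alerts per node
    my %last_alert;            # node => epoch time of last email
    my %status;                # the in-memory state table

    my $listener = IO::Socket::INET->new(
        LocalPort => 7777,     # arbitrary port for the status client
        Listen    => 5,
        ReuseAddr => 1,
        Timeout   => 30,       # accept() gives up after 30s; paces the loop
    ) or die "listen: $!";

    while (1) {
        my %now = check_nodes();
        for my $node ( keys %now ) {
            $status{$node} = $now{$node};
            if ( $now{$node} eq 'down'
                and time - ( $last_alert{$node} // 0 ) > $THROTTLE )
            {
                send_alert($node);           # one email...
                $last_alert{$node} = time;   # ...then throttle repeats
            }
        }
        # Serve any waiting client; accept() times out if none shows up.
        if ( my $client = $listener->accept ) {
            my $cmd = <$client>;
            if ( defined $cmd and $cmd =~ /^status/i ) {
                print $client "$_ => $status{$_}\n" for sort keys %status;
            }
            close $client;
        }
    }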
Whether I receive 1 email or 500 emails from this "watcher" process, my actions will be the same: fire up my client program, check the current status, and do what I can to get the system running again right now. Investigating why it happened can take hours, days, or even weeks.
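The client side can be equally trivial. Assuming the daemon sketch above (localhost, port 7777), something like:

    #!/usr/bin/perl
    # One-command client: connect to the watcher, say "status", print reply.
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $sock = IO::Socket::INET->new(
        PeerAddr => 'localhost',
        PeerPort => 7777,      # must match the daemon sketch above
    ) or die "Cannot reach watcher: $!";

    print $sock "status\n";
    print while <$sock>;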
A simple Perl program can run for years without memory leaks, provided that you pay attention to the "simple" part.
Unix is very efficient compared with Windows at starting new processes, so I wouldn't worry much about that. Except for perhaps this "watcher" program, a cron job looks fine.
Update:
I see that you have 2 tasks that involve the REST API. Consider the "least common denominator": could you also run the once-per-day query every 15 minutes? Maybe it doesn't matter in terms of performance, and if it doesn't, then so what? There is something to be said for simplifying the code at the cost of a minuscule amount of performance.
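As a sketch, assuming a hypothetical poll_api.pl that performs both REST tasks, the whole schedule collapses to a single crontab line:

    # Hypothetical crontab entry: both REST tasks every 15 minutes;
    # the once-per-day work simply runs more often than it strictly needs to.
    */15 * * * *  /usr/local/bin/poll_api.pl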