Hi,
Ok, this is the current situation:
I have a set of daemons running, and one Perl script that controls them. I didn't write this controlling script, but I am questioning its architecture.
Each daemon is executed like:
`./name_daemontool.pl -id=UNIQUEID`
The controlling process manages them with a simple command like:
`ps -ef | grep '_daemontool.pl'`
It then searches that output for UNIQUEID to determine whether a daemon is running or not, and takes the appropriate action based on that (roughly like the sketch below).
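Something like this (a sketch of my reading of it, not the actual controller code; `daemon_is_running` is just my own name for the idea):

```perl
use strict;
use warnings;

# Assumption: the controller decides "running or not" purely by scanning
# the ps listing for the unique id on a *_daemontool.pl command line.
sub daemon_is_running {
    my ($uniqueid) = @_;
    my @ps = `ps -ef`;    # only works if the full command line is visible here
    return scalar grep { /_daemontool\.pl\s+-id=\Q$uniqueid\E(\s|$)/ } @ps;
}

print daemon_is_running('UNIQUEID') ? "up\n" : "down\n";
```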
HOWEVER: I think this is a weak approach. For example, in a Red Hat environment where the COLUMNS environment variable is set to something small, the ps output is truncated and not all of the command line is shown. That means daemons are sometimes believed to be down and get restarted; this happens often, so after a while 1000+ daemons are running, causing abnormal use of system resources.
---
My hope of how this could be handled:
A controlling process, daemon_controller.pl, would be daemonized and would launch name_daemontool.pl as a child process of itself. If name_daemontool.pl dies, daemon_controller.pl would be notified and could take the appropriate action from there, along the lines of the sketch below...
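A minimal sketch of that idea, assuming the daemons can be run in the foreground as direct children (the ids, the restart policy and everything else here are placeholders, not the real daemon_controller.pl):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @ids = qw(ID1 ID2 ID3);    # placeholder UNIQUEIDs to supervise
my %child;                    # pid => uniqueid

sub spawn {
    my ($id) = @_;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: replace ourselves with the daemon process.
        exec('./name_daemontool.pl', "-id=$id")
            or die "exec failed: $!";
    }
    $child{$pid} = $id;       # parent: remember which id this pid runs
}

spawn($_) for @ids;

# Parent: block in waitpid() until any child exits, then restart it.
while ((my $pid = waitpid(-1, 0)) > 0) {
    my $id = delete $child{$pid};
    next unless defined $id;
    warn "daemon $id (pid $pid) exited; restarting\n";
    spawn($id);
}
```

Note that this only works if name_daemontool.pl stays in the foreground; if it forks and detaches itself, the direct child exits immediately and waitpid() no longer tells the controller anything useful. Daemonizing the controller itself and the explicit start/stop commands are left out of the sketch.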
---
My question: What would the best approach be if we only need to control this set of *_daemontool.pl scripts by being able to track whether they have died, start them, and stop them?
This might be too broad a question, but hopefully someone with similar experience can share how they solved this need.
Best regards,
Peter Lauri