fx has asked for the wisdom of the Perl Monks concerning the following question:
Hello,
I have a few little Perl programs which watch system logfiles (e.g. /var/log/maillog) and log certain things into a database. However, these little programs have been known to die suddenly in the past.
My company is becoming more reliant on these programs (they were initially simple testing tools) and wants to monitor them so that they can be restarted if they die. However, they wish to control this from their 'network monitor' machine.
The 'network monitor' sits on a machine and watches various services running on various servers using TCP, UDP or ICMP tests. I have been given the challenge to get my programs to work with this setup.
My current suggestion is to fork/thread/whatever a little TCP or UDP listener from my programs which would then respond to the monitoring machine's requests. In people's opinions, how good is this solution?
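In case it helps the discussion, here is a minimal sketch of that idea. The port number (9999) and the plain "OK" reply are assumptions for illustration, not anything a particular network monitor requires: the main program forks a child whose only job is to answer TCP probes, and the child stops answering once its parent is gone so the monitor notices the failure.

    #!/usr/bin/perl
    # Sketch only: answer the network monitor's TCP probes from a forked child.
    # Port 9999 and the "OK" reply are assumptions for the example.
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # Child: listen for the monitor and reply to each probe.
        my $listener = IO::Socket::INET->new(
            LocalPort => 9999,
            Proto     => 'tcp',
            Listen    => 5,
            ReuseAddr => 1,
        ) or die "child can't listen: $!";

        while (my $probe = $listener->accept) {
            # If the parent (the real log watcher) has died, exit without
            # replying, so this probe and all later ones look like failures.
            exit if getppid() == 1;
            print $probe "OK\n";
            close $probe;
        }
        exit;
    }

    # Parent: carry on tailing the logfile and writing to the database as before.

The obvious weakness is that the child can outlive a crashed parent if it never checks, which is why the getppid() test is there; some people would rather fold the listener into the watcher's own main loop so there is only one process to reason about.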
I suppose my other approach would be to ignore that the 'network monitor' exists and run my own monitoring script which would restart the programs should they die. One way to do this would be to simply watch the output of 'ps', but I'm after a more Perl-based solution. Comments?
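For the watchdog route, something along these lines avoids scraping ps output by using the Proc::ProcessTable module from CPAN. The script name and restart path below are made up for the example, and the exact fields available from Proc::ProcessTable vary a little by OS:

    #!/usr/bin/perl
    # Sketch only: a cron-driven watchdog that restarts the log watcher
    # if it is no longer in the process table.
    use strict;
    use warnings;
    use Proc::ProcessTable;

    my $name    = 'watch_maillog.pl';                # hypothetical script name
    my $restart = '/usr/local/bin/watch_maillog.pl'; # hypothetical path

    my $running = 0;
    my $table   = Proc::ProcessTable->new;
    for my $proc (@{ $table->table }) {
        next if $proc->pid == $$;                    # don't match the watchdog itself
        $running = 1 if $proc->cmndline =~ /\Q$name\E/;
    }

    unless ($running) {
        # Assumes the watcher backgrounds cleanly when started this way.
        system("$restart &") == 0
            or warn "could not restart $name: $?";
    }

Run from cron every few minutes this keeps the programs alive, but it tells the 'network monitor' nothing, so it really is the "ignore the monitor" option.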
fx
Replies are listed 'Best First'.

Re: External monitoring of a Perl program
by blakem (Monsignor) on Sep 18, 2001 at 03:02 UTC
    by dthacker (Deacon) on Sep 18, 2001 at 21:10 UTC

Re: External monitoring of a Perl program
by traveler (Parson) on Sep 18, 2001 at 03:16 UTC

Re: External monitoring of a Perl program
by miyagawa (Chaplain) on Sep 18, 2001 at 04:05 UTC

Re: External monitoring of a Perl program
by nardo (Friar) on Sep 18, 2001 at 02:59 UTC