in reply to Distributed URL monitor

I wonder what benefit distributed monitors provide over, say, running a single monitor that accumulates consecutive errors until a certain threshold is reached, then fires off an email to alert whoever's watching.
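
Concretely, something like this is what I have in mind. A rough sketch only: the 3-failure threshold and 60-second poll interval are arbitrary, and send_alert is a made-up stub:

    use strict;
    use warnings;
    use LWP::Simple qw(head);

    my $url       = 'http://www.example.com';
    my $threshold = 3;                # consecutive failures before alerting

    my $failures = 0;
    while (1) {
        if (head($url)) {             # HEAD request; true on success
            $failures = 0;            # any success resets the count
        }
        elsif (++$failures >= $threshold) {
            send_alert("$url failed $failures checks in a row");
            $failures = 0;            # don't re-alert on every pass
        }
        sleep 60;
    }

    # placeholder -- deliver the alert however suits you
    sub send_alert { warn "ALERT: $_[0]\n" }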

Let's say there's a network glitch between Monitor1 and the target, but not between Monitor2 and the target. It's more than likely that the problem also affects the network between Monitor1 and the watcher. So while Monitor2 reports the error, Monitor1 can't get through to the watcher. What do you do in this case? How do you decide whether it's the monitor's fault, the network's, or the target's?

Re^2: Distributed URL monitor
by Washizu (Scribe) on Jan 28, 2005 at 14:42 UTC

    I think this is a good solution. It's simpler, and simpler usually means better. You could still set up multiple monitors, but they'd run independent of each other. Each could send out an email saying, "Alert: Target 1 Unreachable", and if you receive more than one such email, you can be pretty sure there's a real problem.
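
    For the email itself, Net::SMTP (which ships with Perl) is enough. A bare-bones sketch, assuming a mail relay on localhost and made-up addresses:

    use strict;
    use warnings;
    use Net::SMTP;

    sub send_alert {
        my ($body) = @_;
        my $smtp = Net::SMTP->new('localhost') or return;   # no relay? give up
        $smtp->mail('monitor@example.com');
        $smtp->to('watcher@example.com');
        $smtp->data();
        $smtp->datasend("Subject: Alert: Target 1 Unreachable\n\n$body\n");
        $smtp->dataend();
        $smtp->quit;
    }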

    You probably already know this, but the simplest way to retrieve information from a URL is like this:

    use strict;
    use warnings;
    use LWP::Simple;

    my $url  = 'http://www.google.com';
    my $page = get($url);                 # undef on failure
    die "Couldn't fetch $url\n" unless defined $page;
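
    If you also want to know why a check failed (timeout vs. 404 vs. connection refused), LWP::UserAgent gives you the status line. Rough sketch; the 10-second timeout is just a guess:

    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua  = LWP::UserAgent->new(timeout => 10);
    my $res = $ua->get('http://www.google.com');
    if ($res->is_success) {
        print "OK\n";
    }
    else {
        # status_line covers both HTTP errors and connect failures,
        # e.g. "404 Not Found" or "500 Can't connect to ..."
        print "FAILED: ", $res->status_line, "\n";
    }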

    -----------------------------------
    Washizu
    Odd Man In: Guns and Game Theory