in reply to RE: Distributed network monitor
in thread Distributed network monitor

I agree with your comment about downloading a page from the server.

As you point out, the only thing that Ping tells you is that the machine is switched on and is running ping!

Personally I wish this author would make the code available - I have e-mailed him but have not received a reply. It sounds like a seriously cool script. If anyone knows of any similar scripts, please do post details here!
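
In the meantime, here's a bare-bones sketch of that "download a page instead of pinging" idea using LWP (the URL and timeout are just placeholders):

#!/usr/local/bin/perl -w
# Bare-bones sketch: a HEAD request proves the web server is actually
# serving pages, which ping alone cannot. The URL is a placeholder.
use strict;
use LWP::UserAgent;

my $url = 'http://www.example.com/';
my $ua  = LWP::UserAgent->new;
$ua->timeout(30);    # don't hang forever on a dead server

my $response = $ua->head($url);
if ($response->is_success) {
    print "$url is up and serving pages\n";
} else {
    print "$url failed: ", $response->status_line, "\n";
}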

RE: RE: RE: Distributed network monitor
by perlcgi (Hermit) on May 06, 2000 at 21:50 UTC
    You might be able to achieve what you want with MRTG. Alternatively, I wrote some scripts a while back to produce this. A cron job runs an LWP download from each site being measured, on the hour, every hour. It simply measures the download time from each site and sticks the results in a flat file (not even a database). A CGI script then calculates site performance averages and displays a simple graph. So it's pretty lame and doesn't account properly for time-outs, but it might get you started. Let me know if you want it.
      Yep...
      MRTG could be a start. At least I know the ISP SingNet in
      Singapore is using it. I am also using it now at my attachment
      place.
      I was asked to tie it together with HTML and a Perl script to make it
      easier to configure over the internal network, without having to use
      the vi editor, which many Windows users might not know how to use.
      Running mrtg from the crontab lets you run the programme
      whenever you specify; alternatively, you can run mrtg as a
      daemon with its interval setting in mrtg.cfg to get the same
      effect as a crontab entry. Or a script to edit the crontab will do just fine.
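      For example (the mrtg path and config location here are just guesses; adjust for your own install):

          # crontab entry: run mrtg every five minutes
          */5 * * * * /usr/local/mrtg/bin/mrtg /usr/local/mrtg/mrtg.cfg

          # or, in mrtg.cfg, let mrtg loop on its own instead of using cron
          RunAsDaemon: Yes
          Interval: 5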
      Please post it, if you don't mind. I'm looking for something like that to prove to a friend that it's his dial-up, not my web-site! :-) -Chuck
        OK, here's the bit that does the measuring. I'm not showing the cgi script that does the display - I'm too embarrassed and don't have the time to make it suitable for public consumption. I do have a little bit of hubris (and laziness in spades) :-) But if you *really* want it, leave your email. perlcgi.
        #!/usr/local/bin/perl -w
        # Written as a quick 'n dirty hack - so here it is, warts and all.
        # This just retrieves a hash of sites and times how long each page
        # takes to download. Writes the timing for each site, tab delimited, to a file/stdout.
        use strict;
        use LWP::Simple;                    # I know, better to use UserAgent
        use Time::HiRes qw(gettimeofday);   # Accuracy overkill, maybe?

        my $thisrun = localtime(time);      # When the data was collected

        # Amend the next line as required, or just leave it out
        #open(OPFILE, ">>/var/logs/whatever") || die "Could not open /var/logs/whatever: $!";

        # Hash of sites to be timed
        my %url = (
            pubmed   => "http://www.ncbi.nlm.nih.gov",
            proquest => "http://proquest.umi.com",
            JSTOR    => "http://www.jstor.ac.uk/jstor/",
            AMAZONUK => "http://www.amazon.co.uk/",
            AMAZONUS => "http://www.amazon.com/",
            SPRR     => "http://sprr.library.nuigalway.ie",
            SCIDIR   => "http://www.sciencedirect.com/",
        );

        # Time how long a single page takes to download
        sub process_url {
            my $url = shift;
            my $now_time = gettimeofday;
            my $page = get($url);
            my $elapsed_time = gettimeofday - $now_time;
            print "Site:$url\tTime Taken:$elapsed_time\tRun on $thisrun\n";
            # print OPFILE "$url\t$elapsed_time\t$thisrun\n";
        }

        foreach my $key (sort keys %url) {
            process_url($url{$key});
        }
        # That's it!
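
        If it helps while you wait, here is a rough sketch of the averaging end - not the real display script, just the general idea - assuming the tab-delimited lines the measuring script above writes (url, elapsed time, timestamp):

        #!/usr/local/bin/perl -w
        # Rough sketch only - NOT the real display script. It assumes the
        # tab-delimited log written above: url<TAB>elapsed_time<TAB>timestamp.
        use strict;

        my $logfile = '/var/logs/whatever';   # same file the measuring script appends to

        my (%total, %count);
        open(LOG, $logfile) || die "Could not open $logfile: $!";
        while (<LOG>) {
            chomp;
            my ($url, $elapsed, $when) = split /\t/;
            next unless defined $elapsed;
            $total{$url} += $elapsed;
            $count{$url}++;
        }
        close LOG;

        print "Content-type: text/plain\n\n";
        foreach my $url (sort keys %total) {
            my $avg = $total{$url} / $count{$url};
            # crude text "graph": one '#' per tenth of a second of average download time
            printf "%-35s %7.2fs  %s\n", $url, $avg, '#' x int($avg * 10);
        }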