I'd like to thank you for sharing your code, but your program isn't so much a 'spider' as a 'monitoring' tool. That is, it doesn't recursively go through the pages to find other links to follow -- it only fetches a single page (or a limited list of single pages) and checks that the page in question isn't generating an error.
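For what it's worth, the recursive part is only a small step up from what you have. Here's a rough, untested sketch using LWP::UserAgent and HTML::LinkExtor (the starting URL is just a placeholder, and it doesn't restrict itself to a single site, which you'd want to add before letting it loose):

    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTML::LinkExtor;
    use URI;

    my $ua    = LWP::UserAgent->new( timeout => 10 );
    my @queue = ('http://www.example.com/');   # placeholder start page
    my %seen;

    while ( my $url = shift @queue ) {
        next if $seen{$url}++;
        my $res = $ua->get($url);
        print "$url => ", $res->status_line, "\n";
        next unless $res->is_success
            and $res->content_type eq 'text/html';

        # pull the links out of the page and queue the ones we haven't seen
        my $extor = HTML::LinkExtor->new( undef, $url );
        $extor->parse( $res->decoded_content );
        for my $link ( $extor->links ) {
            my ( $tag, %attr ) = @$link;
            next unless $tag eq 'a' and $attr{href};
            push @queue, URI->new( $attr{href} )->canonical->as_string;
        }
    }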
There are plenty of free link checkers already available, such as the one from the W3C, or Linklint (written in Perl, and open source).
If you're going to use something for monitoring, you might want to verify that the page is the same as a known good copy (or falls within some tolerance of a good copy, if you're monitoring a dynamic page), as there are many things that can go wrong without generating an error. (e.g., being served the Apache default 'new server' page would still come back as a 200.)
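Something along these lines would catch that case. This is just a sketch: the URL and the path to the known good copy are placeholders, and it does a byte-for-byte comparison, so a dynamic page would need something fuzzier (strip timestamps first, or just check for a few key strings):

    use strict;
    use warnings;
    use LWP::UserAgent;

    # Placeholder URL and known-good copy; adjust for your own pages.
    my $url       = 'http://www.example.com/index.html';
    my $good_copy = '/var/monitor/index.html.good';

    my $ua  = LWP::UserAgent->new( timeout => 10 );
    my $res = $ua->get($url);

    die "FAIL: $url returned ", $res->status_line, "\n"
        unless $res->is_success;

    # read the saved copy as raw bytes so the comparison is apples to apples
    open my $fh, '<:raw', $good_copy or die "Can't read $good_copy: $!";
    my $known = do { local $/; <$fh> };
    close $fh;

    if ( $res->content ne $known ) {
        warn "WARN: $url no longer matches the known good copy\n";
    }
    else {
        print "OK: $url matches the known good copy\n";
    }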