PyroX has asked for the wisdom of the Perl Monks concerning the following question:

Hey Folks,

I have a question that has been begging me to deal with it.

I, or rather my associates, need a large-scale search appliance. I would like to end up with similar functionality to an existing search engine, but I don't really care who's linking to whom and why.

I need to build a spider. It doesn't have to be very complicated; basically:

- open the initial page submitted to be crawled,
- parse the page's output,
- gather links and image names and URLs (building full URLs as we walk),
- get any non-script/HTML text longer than x chars,
- add a database entry for that page,
- check each link gathered on the page against a list of domains that we can't leave,
- discard the bad links, follow the good ones, and start over again.

When we get lost or mess up really badly, we die and start a child to pick up on the next link.
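For what it's worth, the loop described above can be sketched in pure Perl with LWP::UserAgent, HTML::LinkExtor, and URI (all from CPAN). This is only a rough sketch under my own assumptions: the @allowed_domains list, the allowed_host helper, and the placeholder where text extraction and the database insert would go are all mine, not a finished design.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTML::LinkExtor;
use URI;

# Domains we are not allowed to leave (placeholder list -- an assumption).
my @allowed_domains = qw(example.com);

# True if $host is one of the allowed domains or a subdomain of one.
sub allowed_host {
    my ($host) = @_;
    return scalar grep { $host eq $_ || $host =~ /\.\Q$_\E$/ } @allowed_domains;
}

sub crawl {
    my @queue = @_;             # initial page(s) submitted to be crawled
    my %seen;                   # URLs we have already fetched
    my $ua = LWP::UserAgent->new( timeout => 10 );

    while ( my $url = shift @queue ) {
        next if $seen{$url}++;

        my $resp = $ua->get($url);
        next unless $resp->is_success
                and $resp->content_type eq 'text/html';

        # Gather link and image URLs from the page.
        my @links;
        my $extor = HTML::LinkExtor->new( sub {
            my ( $tag, %attr ) = @_;
            push @links, $attr{href} if $tag eq 'a'   && $attr{href};
            push @links, $attr{src}  if $tag eq 'img' && $attr{src};
        } );
        $extor->parse( $resp->decoded_content );

        # ... extract the page text and add a database entry here ...

        for my $link (@links) {
            my $abs = URI->new_abs( $link, $url );  # build full URLs as we walk
            next unless $abs->scheme && $abs->scheme =~ /^https?\z/;
            next unless allowed_host( $abs->host ); # discard the bad ones
            push @queue, $abs->as_string;           # follow the good ones
        }
    }
}

crawl(@ARGV) if @ARGV;   # e.g. perl spider.pl http://example.com/
```

A %seen hash plus a queue gives you a simple breadth-first walk; for the "die and let a child pick up" behaviour you'd wrap the fetch in eval {} or fork per link instead.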


Parsing the pages and checking the domains is easy. So is the database portion; well, all of this is easy.
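For the database portion, the per-page entry could be as simple as one row per URL via DBI. The schema (url/title/body), the SQLite backend via DBD::SQLite, and the table name are all assumptions for illustration; any DBD driver would do.

```perl
use strict;
use warnings;
use DBI;

# Assumed schema: one row per crawled page. Using an in-memory SQLite
# database here for the sketch; a real spider would use a file or server.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
                        { RaiseError => 1, AutoCommit => 1 } );

$dbh->do(q{
    CREATE TABLE IF NOT EXISTS pages (
        url   TEXT PRIMARY KEY,
        title TEXT,
        body  TEXT
    )
});

# Insert (or refresh) the entry for a page we just crawled.
my $sth = $dbh->prepare(
    'INSERT OR REPLACE INTO pages (url, title, body) VALUES (?, ?, ?)'
);
$sth->execute( 'http://example.com/', 'Example', 'some non-script text...' );
```

INSERT OR REPLACE keyed on the URL means re-crawling a page just refreshes its row instead of duplicating it.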

My question is: has this been done already? Do you recommend I develop this in Perl, or should I look elsewhere? What are your thoughts/blessings/jeers?