in reply to Re^2: blocking site scrapers
in thread blocking site scrapers

You could start building a second database (or add a field to the present one) listing the IP addresses that requested robots.txt, or that identified themselves as Googlebot, SurveyBot, Yahoo!, ysearch, sohu-search, msnbot, RufusBot, netcraft.com, MMCrawler, Teoma, ConveraMultimediaCrawler, or whatever else seems reputable.
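If it helps, here is a minimal sketch of how that list could be seeded from an Apache combined-format access log. The log path and the bot-name list are only examples, and in practice you'd insert the results into your database rather than print them:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Bot names taken from the list above; extend as needed.
    my @known_bots = qw(Googlebot SurveyBot ysearch sohu-search msnbot
                        RufusBot MMCrawler Teoma ConveraMultimediaCrawler);

    my %friendly_ip;
    open my $log, '<', '/var/log/httpd/access_log' or die "open: $!";
    while (my $line = <$log>) {
        # combined format: ip - - [date] "request" status bytes "referer" "agent"
        my ($ip, $request, $agent) =
            $line =~ /^(\S+) \S+ \S+ \[[^\]]+\] "([^"]*)" \S+ \S+ "[^"]*" "([^"]*)"/
            or next;

        # Requested robots.txt, or claims to be one of the known crawlers.
        if ($request =~ m{^(?:GET|HEAD) /robots\.txt}i) {
            $friendly_ip{$ip} ||= 'requested robots.txt';
        }
        if (grep { index($agent, $_) >= 0 } @known_bots) {
            $friendly_ip{$ip} = $agent;
        }
    }
    close $log;

    # Dump the whitelist; replace this with inserts into your database.
    print "$_\t$friendly_ip{$_}\n" for sort keys %friendly_ip;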

My main criterion for a bot being OK is whether it asks for robots.txt. However, this isn't 100% reliable: there's a bot out there, called WebVulnScan or WebVulnCrawl, that uses robots.txt to scrape only the forbidden directories and pages, ignoring the allowed ones. That's just plain rude.

But just a thought - if a search bot is burning your bandwidth, isn't that still something you'd want to avoid?

Re^4: blocking site scrapers
by Anonymous Monk on Feb 07, 2006 at 14:00 UTC
    Hi.

    Seeing if they checked for a robots.txt file sounds like a great idea, but how would I know whether they did or not?

      Well, the request they send would contain that text. For example, they'd say "GET /robots.txt ...". Their request usually contains other information, such as the IP they're using (or claiming to use), the name of the browser or user agent, and so on. A user agent might show up as "Mozilla/2.0(compatible; Ask Jeeves/Teoma;+http://sp.ask.com/docs/about/tech_crawling.html)". This is a polite bot whose user-agent string includes an address where you can get more information about it.
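      For instance, a single combined-format log line from such a bot might look like the made-up sample below, and a couple of regexes are enough to spot the robots.txt fetch and pull out the user agent:

          use strict;
          use warnings;

          # A made-up sample line in Apache combined format (IP and date invented).
          my $line = '66.235.124.7 - - [07/Feb/2006:14:00:00 -0500] '
                   . '"GET /robots.txt HTTP/1.0" 200 120 "-" '
                   . '"Mozilla/2.0(compatible; Ask Jeeves/Teoma;+http://sp.ask.com/docs/about/tech_crawling.html)"';

          if ($line =~ m{"(?:GET|HEAD) /robots\.txt[^"]*"}) {
              my ($agent) = $line =~ /"([^"]*)"$/;   # last quoted field is the user agent
              print "robots.txt fetched by: $agent\n";
          }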

      Of course, someone could fake most of that (maybe all of it), but they usually don't. And anyway, even if it's MotherTeresaBot, if it's hogging your bandwidth, it's still causing you problems.