Is it illegal or unethical for such a scraper to ignore robots.txt?
First, I'd strongly affirm that illegal is definitely not the same thing as unethical. In fact, the new federal rule demanding that web spiders obey robots.txt may be legal, but it seems unethical to me.
As I understand it, robots.txt isn't in any manner an access control system, and declaring it one in law and enforcing it as such is plain nonsense from a justice system gone mad.
The RFC defining the robots.txt standard (the robots.txt RFC) states it very clearly:
It is solely up to the visiting robot to consult this information and act accordingly. Blocking parts of the Web site regardless of a robot's compliance with this method are outside the scope of this memo.
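To make the "solely up to the visiting robot" point concrete, here is a minimal Perl sketch (the agent name, e-mail address, and URL are placeholders, and this is only one way to do it): LWP::RobotUA voluntarily consults robots.txt before every request, while its parent class LWP::UserAgent never looks at it at all.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::RobotUA;
    use LWP::UserAgent;

    # A "polite" robot: consults robots.txt before each request.
    # Agent name and e-mail address are placeholders.
    my $polite = LWP::RobotUA->new('example-spider/0.1', 'me@example.com');
    $polite->delay(1);    # wait 1 minute between requests to the same host

    my $res = $polite->get('http://example.com/private/page.html');
    # If robots.txt disallows the path, LWP::RobotUA refuses by itself
    # and returns a 403 with "Forbidden by robots.txt".
    print $res->status_line, "\n";

    # An "impolite" client: same request, but nothing in LWP::UserAgent
    # ever reads robots.txt -- honouring it is purely voluntary.
    my $blunt = LWP::UserAgent->new(agent => 'example-spider/0.1');
    print $blunt->get('http://example.com/private/page.html')->status_line, "\n";

Note that the 403 in the polite case is generated locally, by the client itself: nothing on the server side enforces anything, which is exactly what the RFC says.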
Regarding your own personal web spider, I'd say: who will ever know that you sucked up a site with it? How could anyone prove that you didn't simply hit Ctrl+S in your browser while visiting the site? And how could anyone forbid you to save a personal backup copy of a publicly available document? That doesn't make sense. Republishing content, as the Google cache or archive.org do, may be questionable, but you're definitely allowed to store an unmodified copy of a web site for your personal use, unless you're living in Iran or China.
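For what it's worth, that "personal backup copy" amounts to very little code anyway. A minimal sketch using LWP::Simple (the URL and output filename are invented for illustration):

    use strict;
    use warnings;
    use LWP::Simple qw(getstore is_success);

    # Save an unmodified copy of a public page for personal use,
    # much as hitting Ctrl+S in a browser would.
    my $status = getstore('http://example.com/article.html', 'article.html');
    print is_success($status) ? "saved\n" : "failed: $status\n";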