in reply to Re: Why no robots.txt?
in thread Why no robots.txt?
An idea could be to check the user agent. If it's a bot (maybe even an automated client such as wget), PM could log it in as a specific user. This user would have the relevant nodelets disabled in its user preferences. Other settings, such as max depth, could be useful to alter too.
Then the spider could happily start its crawl of the site, free of much of the noise (noise for the spider, that is; for us it's lots of fun). Or it could be redirected to a relevant section.
Sure, checking the user agent (when a user is not logged in) might be expensive, but most pages require extensive database usage anyway. The cost of checking the user agent against a set of known bots (easy to obtain and maintain from web server logs) would probably be compensated for by far fewer nodelet requests afterwards.
Maybe it could be done on the home page only, but since most pages are implemented through index.pl, it may be worth checking every time; that is, every time a request comes from a client that is not logged in.
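Just to make the idea concrete, here is a minimal sketch of what such a check might look like. The bot patterns and the 'spider_guest' account name are made up for illustration, not anything that exists in the PerlMonks code base:

    # Hypothetical sketch: map known crawlers (and plain automated clients)
    # to a stripped-down guest account whose nodelets are disabled.
    my @bot_patterns = (
        qr/googlebot/i,
        qr/slurp/i,       # Yahoo's crawler
        qr/msnbot/i,
        qr/\bwget\b/i,    # automated clients, too
    );

    sub user_for_request {
        my ($user_agent, $logged_in_user) = @_;

        # Only bother when nobody is logged in; real users keep their settings.
        return $logged_in_user if defined $logged_in_user;

        for my $pattern (@bot_patterns) {
            # Known crawler: hand back the special account that has the
            # noisy nodelets turned off in its user preferences.
            return 'spider_guest' if $user_agent =~ $pattern;
        }

        return 'anonymous';    # ordinary not-logged-in visitor
    }

    # e.g. inside index.pl, before rendering the page:
    my $effective_user = user_for_request( $ENV{HTTP_USER_AGENT}, undef );

Since the pattern list is small and only consulted for requests with no session, the per-request cost should be negligible next to the database work the page does anyway.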
-- TMTOWTDI