Would it be an idea to have a robots.txt file for perlmonks?

If, for instance, you go to Google and search for "crazyinsomniac" (first name that came to mind), you get hundreds of crap results: partly because Google has crawled pages captured while he was on the talker, and partly because every node that links to a page containing "crazyinsomniac" puts a different "lastnode_id=???" on the link, so the same page gets indexed over and over.

If nothing else, it'll cut the load on the server when a (well-behaved) bot comes by.
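
As a rough sketch only (the wildcard in the Disallow line is a non-standard extension that some major crawlers such as Googlebot honour, and the pattern is just a guess at what would be worth excluding), something like this could at least cut out the lastnode_id duplicates:

    # robots.txt -- sketch only, not a tested configuration
    User-agent: *
    Disallow: /*lastnode_id=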

Replies are listed 'Best First'.
Re: Why no robots.txt?
by Masem (Monsignor) on Aug 03, 2001 at 15:01 UTC
    I believe we've already discussed the indexing of PM, and most agree that there's no problem with having search engines go through the site. While there is a DisplayType of Raw available that simply drops all the nodelets from the page, it's hard to force a robot to use those pages while still serving nodelets to normal visitors.

    In other words, it's a situation with no easy fix.

    -----------------------------------------------------
    Dr. Michael K. Neylon - mneylon-pm@masemware.com || "You've left the lens cap of your mind on again, Pinky" - The Brain

      One idea would be to check the user agent. If it's a bot (maybe even an automated client such as wget), PM could log it in as a specific user. That user would have the relevant nodelets disabled in its user preferences. Other settings, such as max depth, might be worth altering too.

      Then the spider could happily start its crawl of the site, free of much of the noise (noise to the spider, that is; for us it's lots of fun). Or it could be redirected to a relevant section.

      Sure, checking the user agent (when a user is not logged in) might be expensive, but most pages require extensive database usage anyway. The cost of checking the user agent against a set of known bots (easy to obtain and maintain from the Web server logs) would probably be compensated by far fewer nodelet requests afterwards.

      Maybe it could be done on the homepage only but, since most pages are served through index.pl, it may be worth checking on every request. That is, every request coming from a client which is not logged in. A rough sketch of such a check is below.
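
      A rough sketch only, assuming a made-up 'spider'/'guest' account scheme and an illustrative list of bot patterns (not PM's actual code or configuration):

          # Map known crawler User-Agent strings to a stripped-down "spider" account;
          # everything else stays on the normal anonymous account with nodelets intact.
          my @bot_patterns = (
              qr/googlebot/i,
              qr/slurp/i,          # Inktomi / Yahoo
              qr/wget/i,
              qr/libwww-perl/i,
          );

          sub pick_account_for {
              my ($user_agent) = @_;
              for my $pat (@bot_patterns) {
                  return 'spider' if $user_agent =~ $pat;
              }
              return 'guest';
          }

          # In the request handler, only for clients that aren't logged in:
          # my $account = pick_account_for( $ENV{HTTP_USER_AGENT} || '' );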

      -- TMTOWTDI