in reply to Re^2: Google indexes Perlmonks
in thread Google indexes Perlmonks

Considering that it isn't hard to determine that a request comes from a Google webcrawler, and that it's already possible to render pages with or without nodelets based on a user profile, it wouldn't be too hard to serve Google nodelet-free pages, would it?
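
Roughly what I have in mind, as a sketch only; render_node() and render_nodelets() are made-up stand-ins for whatever really builds the page:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch only: the two helpers below are placeholders for the real
    # page-building code; the crawler check is the point of the example.
    sub render_node     { "<html><body><p>...node text...</p>\n" }
    sub render_nodelets { "<div>...nodelets...</div>\n" }

    my $ua = $ENV{HTTP_USER_AGENT} || '';
    my $is_crawler = $ua =~ /googlebot/i;

    my $page = render_node();
    $page .= render_nodelets() unless $is_crawler;
    print "Content-type: text/html\n\n", $page, "</body></html>\n";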

Replies are listed 'Best First'.
Re^4: Google indexes Perlmonks
by demerphq (Chancellor) on Dec 26, 2004 at 13:23 UTC

    I think the problem is that it's not unlikely that a web crawler ends up hammering the system by following far too many links and generating huge numbers of unnecessary page fetches. For instance, the spider lands on the front page, which links to the high-activity sections along with a large number of root-level nodes. It then follows each section and each front-paged node. Each of those nodes has links to itself, so it won't just index the root node and the thread below it, but the whole thing and then each reply singly, and quite possibly it will do this twice for the front-paged nodes. Then it will also index each user's home node, which of course leads to lists of nodes written by that author, and I imagine that will eventually result in Google single-handedly fetching pretty close to each and every node on the site. This is a load that we just don't need.

    Of course the CB and various other bits that we don't really want indexed are also a reason. But I should think the core reason is that our site isn't particularly amenable to automated crawlers. The whole point of blakem's static mirror is that it is static, updated rarely, and at a low load threshold. Once it's mirrored, Google can search and index it as it likes; there won't be unnecessary load on our DB servers, so we don't really care at that point.

    ---
    demerphq

      It would have to be a pretty stupid crawler to do that. Most of them don't follow links on the same site indefinitely. I know that the Googlebot eventually “gets bored” and I imagine all the major search engines follow the same principles.

      This isn't simple courtesy — the bot would be easy to trap otherwise. You could keep it treading water on a site indefinitely by leading it onto a script which generates self-links that aren't obviously such.
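
      (A toy version of such a trap, purely for illustration; the script name and URL scheme are made up. Each generated page hands out three fresh links back into the same script, so a crawler that blindly follows them never runs out of URLs.)

          #!/usr/bin/perl
          # trap.cgi -- hypothetical spider trap: an endless tree of generated pages
          use strict;
          use warnings;

          my ($n) = ($ENV{QUERY_STRING} || '') =~ /(\d+)/;
          $n = 0 unless defined $n;

          print "Content-type: text/html\n\n";
          print qq{<html><body><p>Archive page $n</p>\n};
          for my $i (1 .. 3) {
              my $next = $n * 3 + $i;    # a "page" number no earlier page has linked to
              print qq{<a href="trap.cgi?$next">archive page $next</a><br>\n};
          }
          print "</body></html>\n";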

      I'm not really advocating anything in particular; I think thepen's static archive is fine for the job.

      But if we wanted to accommodate bots on the live site, it wouldn't be difficult at all, nor would it impose disproportionate traffic. For example, the bots could be instructed to follow the links from section frontpages, but not to index their content. On root nodes, they could be given a view with a plain unthreaded list of links to the notes associated with the node, but without the notes' text. On notes, they'd only see the text of the particular note visited; there wouldn't even be a query against the DB to look for replies. That should keep the load pretty moderate and would improve indexing too (you don't get bogus hits on nodes where the hit appeared in a reply).
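
      Concretely, that mostly comes down to a robots meta tag per view; something like the following sketch, where the page-type names are invented for the illustration and the real decision would live wherever the page headers get built:

          #!/usr/bin/perl
          use strict;
          use warnings;

          # Sketch: choose a robots meta tag by page type. The type names
          # are invented labels for the three views described above.
          my %policy = (
              section  => 'noindex,follow',   # crawl the links, skip the frontpage text
              rootnode => 'index,follow',     # index the root node, follow its bare reply links
              note     => 'index,nofollow',   # index this one note, chase nothing further
          );

          sub robots_meta {
              my $type    = shift;
              my $content = $policy{$type} || 'noindex,nofollow';
              return qq{<meta name="robots" content="$content">\n};
          }

          print robots_meta($_) for qw(section rootnode note);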

      I don't know if the effort would be justified, but it is entirely feasible.

      Makeshifts last the longest.

        I think my explanation fits rather well with the comment in our robots.txt:

        # sorry, but misbehaved robots have ruined it for all of you.

        And sure, good bots probably are smart like you said; I'm just saying that even if it were a naive depth-first search of minimal depth, it's still going to put a heavy load on the server.
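
        For reference, the robots.txt side of that looks something like this (the Disallow rule is only an illustration, not necessarily what the actual file says):

            # sorry, but misbehaved robots have ruined it for all of you.
            User-agent: *
            Disallow: /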

        ---
        demerphq

Re^4: Google indexes Perlmonks
by Aristotle (Chancellor) on Dec 24, 2004 at 15:44 UTC

    No, but would it also be less work than thepen? Maybe. Maybe not.

    Makeshifts last the longest.