in reply to Re^8: Unable to connect
in thread Unable to connect

You wrote:

I just did some log file messin', and found that there have been a huge number of hits on the site from the address range 66.249.64. to 66.249.79.

And these are listed in the Googlebot ranges. Therefore, the evidence does suggest that the "huge number of hits" are down to the googlebot crawler.


🦛

Re^10: Unable to connect
by afoken (Chancellor) on Mar 25, 2025 at 09:39 UTC
    And these are listed in the Googlebot ranges.

    With that list, it should be quite easy to block the bot automatically from using Super Search and other "expensive" pages. A cron job could mirror and import that list once per week or so, and a cheap check against it could just return 403 Forbidden from the Super Search. And if other bots misbehave, they could easily be added to that list.
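A minimal sketch of such a check, using Python's ipaddress module. The CIDR ranges below are illustrative (the first one covers the 66.249.64–79 range observed in the logs); in practice the cron job would refresh the mirrored Googlebot list and load all of its ranges:

```python
import ipaddress

# Example ranges; a cron job would rebuild this list weekly
# from the mirrored Googlebot address file.
BLOCKED_RANGES = [
    ipaddress.ip_network("66.249.64.0/20"),
]

def is_blocked(addr: str) -> bool:
    """True if addr falls inside any mirrored bot range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_RANGES)
```

The Super Search handler could then answer 403 whenever `is_blocked(remote_addr)` is true.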

    At least PostgreSQL allows comparing IP addresses against IP address ranges right in the database: https://www.postgresql.org/docs/current/functions-net.html

    MariaDB has an INET6 type usable for IPv4 and IPv6 addresses (https://mariadb.com/kb/en/inet6/), but it seems to lack functions for handling netmasks and IP address ranges. You have to use bit operations for that.

    MySQL doesn't even have the INET6 type, just functions for handling IP addresses: https://dev.mysql.com/doc/refman/9.2/en/miscellaneous-functions.html. Again, there is no support for netmasks or address ranges; you need to use bit operations.
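The bit-operation workaround those databases force on you is the same arithmetic everywhere; here is a Python sketch of what a MySQL query built on INET_ATON-style integers would have to do:

```python
import socket
import struct

def ip_to_int(addr: str) -> int:
    """IPv4 dotted quad to a 32-bit integer (what MySQL's INET_ATON returns)."""
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def in_cidr(addr: str, network: str, prefix_len: int) -> bool:
    """True if addr lies in network/prefix_len, using only bit operations."""
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return (ip_to_int(addr) & mask) == (ip_to_int(network) & mask)
```

In SQL this becomes a comparison like `(INET_ATON(addr) & mask) = masked_network`, with the mask and network stored as integers.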

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

      For googlebot you don't need any of that. It at least self-identifies in the User-Agent string so you can simply block on that in the front end. It will also adhere to directives in robots.txt.
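A front-end check on the User-Agent header can indeed be trivial; a Python sketch of the idea (the bot names are examples, and which pages count as "expensive" is up to the site):

```python
# Crawlers that self-identify and that we want to keep away
# from expensive pages (examples; extend as needed).
BLOCKED_AGENTS = ("Googlebot", "GoogleOther", "bingbot")

def is_blocked_agent(user_agent: str) -> bool:
    """True if the User-Agent names a crawler on the block list."""
    return any(bot in user_agent for bot in BLOCKED_AGENTS)
```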

      It's the scummy LLM bots who masquerade as normal browsers and come from wide, unpublished IP ranges in a thundering, DDoSing herd who are the real problem these days. That's how Cloudflare have started to make tons of cash and that is another problem in and of itself, alas.


      🦛

        For googlebot you don't need any of that. It at least self-identifies in the User-Agent string so you can simply block on that in the front end. It will also adhere to directives in robots.txt.

        The last traces of "don't be evil" ... ;-)

        So, what do we tell friendly bots?

        https://perlmonks.org/robots.txt:

        # Please only spider https://www.perlmonks.org not https://perlmonks.org
        User-agent: *
        Disallow: /

        "Go away."

        https://www.perlmonks.org/robots.txt:

        # Be kind. Wait between fetches longer than each fetch takes.
        User-agent: *
        Disallow: /bare/
        Disallow: /mobile/
        Crawl-Delay: 20

        "Don't touch /bare/ and /mobile/, and crawl slowly."

        Why do we allow bots to fetch Super Search and probably other "expensive" pages? Granted, Super Search is a form whose action uses POST, and well-behaved bots should not send POST requests. But obviously, some do.

        It's the scummy LLM bots who masquerade as normal browsers and come from wide, unpublished IP ranges in a thundering, DDoSing herd who are the real problem these days.

        So we are back to rate limiting, and maybe requiring logins for "expensive" pages.
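Rate limiting along those lines could be as simple as a token bucket per client IP. A generic Python sketch (the rate and capacity are made-up numbers, and a real deployment would also need to expire idle entries):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow `rate` requests per second per client, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)   # current tokens per IP
        self.last = defaultdict(time.monotonic)       # last-seen time per IP

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens[client_ip] = min(
            self.capacity, self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1.0:
            self.tokens[client_ip] -= 1.0
            return True
        return False
```

An "expensive" page would call `allow()` first and serve an error (or demand a login) when it returns false.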

        Alexander

        --
        Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Re^10: Unable to connect
by jdporter (Paladin) on Mar 25, 2025 at 16:40 UTC

    Thanks. I didn't know about that. So monitoring the user-agent strings, some interesting things pop out:

    1. Go-http-client/1.1
    2. Mozilla/5.0 (compatible; AhrefsBot/7.0; +http://ahrefs.com/robot/)
    3. Mozilla/5.0 (compatible; DotBot/1.2; +https://opensiteexplorer.org/dotbot; help@moz.com)
    4. Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot)
    5. Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot) Chrome/119.0.6045.214 Safari/537.36
    6. Mozilla/5.0 (Linux; Android 5.0) AppleWebKit/537.36 (KHTML, like Gecko) Mobile Safari/537.36 (compatible; Bytespider; spider-feedback@bytedance.com)
    7. Mozilla/5.0 (Linux; Android 5.0) AppleWebKit/537.36 (KHTML, like Gecko) Mobile Safari/537.36 (compatible; TikTokSpider; ttspider-feedback@tiktok.com)
    8. Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.6998.165 Mobile Safari/537.36 (compatible; GoogleOther)
    9. Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.6998.165 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

    Some clearly say "bot" (or, in a couple cases, "spider"), which is nice. What I'm a little concerned about is #8, "GoogleOther". What does that mean? It's also coming from 66.249.*, just like the "Googlebot" ones. We don't want to keep Google from indexing the site, we just don't want it to hit Super Search (too often).
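For what it's worth, Google's crawler documentation describes GoogleOther as a generic crawler used by Google product teams outside of Search indexing, and it documents verifying its crawlers by a reverse-then-forward DNS round trip rather than by hard-coded ranges. A Python sketch of that check (the hostname-suffix test is split out so it can be exercised without network access):

```python
import socket

# Domains Google documents for its crawlers' PTR records.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def is_google_hostname(hostname: str) -> bool:
    """True if a PTR name falls under Google's documented crawler domains."""
    return hostname.endswith(GOOGLE_SUFFIXES)

def verify_googlebot(addr: str) -> bool:
    """Reverse DNS must point into a Google crawler domain, and the
    forward lookup of that name must return the original address."""
    try:
        hostname = socket.gethostbyaddr(addr)[0]
    except OSError:
        return False
    if not is_google_hostname(hostname):
        return False
    try:
        return addr in socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False
```

That way Googlebot and GoogleOther can still index the site while anything merely claiming their User-Agent gets treated as an impostor.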

    A bunch of different ones occur in the log (which, btw, goes back to Sept. 2021)...

    AhrefsBot Amazonbot AntBot Applebot BLEXBot Bytespider CCBot ClaudeBot DataForSeoBot DotBot GPTBot GoogleOther Googlebot PetalBot SPIDER SemrushBot SEOkicks SpiderLing TikTokSpider YandexBot bingbot