in reply to Re^9: Unable to connect
in thread Unable to connect

And these are listed in the Googlebot ranges.

With that list, it should be quite easy to block the bot automatically from using Super Search and other "expensive" pages. A cron job could mirror and import that list once per week or so, and a cheap check against that list could simply return 403 Forbidden from Super Search. And if other bots misbehave, they could easily be added to that list.
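A minimal sketch of such a check, assuming the cron job has written the mirrored ranges as one CIDR per line into a local file (the file name and where this gets hooked into the page code are just placeholders), using Net::CIDR::Lite:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Net::CIDR::Lite;

    # One matcher per address family; a Net::CIDR::Lite object holds either
    # IPv4 or IPv6 ranges, and the Googlebot list contains both.
    my ($v4, $v6) = (Net::CIDR::Lite->new, Net::CIDR::Lite->new);

    open my $fh, '<', '/var/lib/pm/bot-ranges.txt' or die "open: $!";
    while (my $range = <$fh>) {
        chomp $range;
        next unless $range =~ m{/};                 # skip blank or comment lines
        ($range =~ /:/ ? $v6 : $v4)->add($range);
    }
    close $fh;

    my $ip = $ENV{REMOTE_ADDR} // '';
    if ($ip && ($ip =~ /:/ ? $v6->find($ip) : $v4->find($ip))) {
        print "Status: 403 Forbidden\r\n",
              "Content-Type: text/plain\r\n\r\n",
              "Bots may not use Super Search.\n";
        exit;
    }
    # ... otherwise render the page as usual ...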

At least PostgreSQL allows comparing IP addresses against IP address ranges right in the database: https://www.postgresql.org/docs/current/functions-net.html
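For example, with the mirrored ranges stored in a table bot_ranges(net cidr) (table and column names are assumptions), the whole check collapses into one query using the built-in "is contained within or equals" operator <<=:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=monks', 'user', 'password',
                           { RaiseError => 1 });

    # True if the client address falls into any mirrored range.
    my ($blocked) = $dbh->selectrow_array(
        'SELECT EXISTS (SELECT 1 FROM bot_ranges
                        WHERE CAST(? AS inet) <<= net)',
        undef, $ENV{REMOTE_ADDR},
    );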

MariaDB has an INET6 type usable for IPv4 and IPv6 addresses (https://mariadb.com/kb/en/inet6/), but it seems to lack functions for handling netmasks and IP address ranges. You have to use bit operations for that.

MySQL doesn't even have the INET6 type, just functions for handling IP addresses: https://dev.mysql.com/doc/refman/9.2/en/miscellaneous-functions.html. Again, there is no support for netmasks or address ranges; you have to use bit operations.
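One way to make that workable (IPv4 only, with made-up table and column names) is to pre-compute numeric network and netmask values while importing the list, so the per-request check reduces to a single AND per row:

    use strict;
    use warnings;
    use Socket qw(inet_aton);

    # e.g. '66.249.64.0/19' -> (unpack('N', inet_aton('66.249.64.0')), 0xFFFFE000)
    sub cidr_to_numbers {
        my ($cidr) = @_;
        my ($addr, $bits) = split m{/}, $cidr;
        my $netmask = (0xFFFFFFFF << (32 - $bits)) & 0xFFFFFFFF;
        my $network = unpack('N', inet_aton($addr)) & $netmask;
        return ($network, $netmask);
    }

    # With bot_ranges(network BIGINT UNSIGNED, netmask BIGINT UNSIGNED),
    # the lookup then becomes e.g.:
    #   SELECT 1 FROM bot_ranges WHERE (INET_ATON(?) & netmask) = network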

Alexander

--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

Re^11: Unable to connect
by hippo (Archbishop) on Mar 25, 2025 at 10:04 UTC

    For googlebot you don't need any of that. It at least self-identifies in the User-Agent string so you can simply block on that in the front end. It will also adhere to directives in robots.txt.

    It's the scummy LLM bots who masquerade as normal browsers and come from wide, unpublished IP ranges in a thundering, DDoSing herd who are the real problem these days. That's how Cloudflare have started to make tons of cash and that is another problem in and of itself, alas.


    🦛

      For googlebot you don't need any of that. It at least self-identifies in the User-Agent string so you can simply block on that in the front end. It will also adhere to directives in robots.txt.

      The last traces of "don't be evil" ... ;-)

      So, what do we tell friendly bots?

      https://perlmonks.org/robots.txt:

          # Please only spider https://www.perlmonks.org not https://perlmonks.org
          User-agent: *
          Disallow: /

      "Go away."

      https://www.perlmonks.org/robots.txt:

          # Be kind. Wait between fetches longer than each fetch takes.
          User-agent: *
          Disallow: /bare/
          Disallow: /mobile/
          Crawl-Delay: 20

      "Don't touch /bare/ and /mobile/, and crawl slowly."

      Why do we allow bots to fetch Super Search and probably other "expensive" pages at all? Granted, Super Search is a form whose method is set to POST, and bots should not send POST requests. But obviously, they do.

      It's the scummy LLM bots who masquerade as normal browsers and come from wide, unpublished IP ranges in a thundering, DDoSing herd who are the real problem these days.

      So we are back to rate limiting, and maybe requiring logins for "expensive" pages.
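      A fixed-window rate limiter can be quite small. The sketch below assumes a memcached instance is available; the key prefix, limit and window are arbitrary illustrations, not site policy:

          use strict;
          use warnings;
          use Cache::Memcached;

          my $memd   = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });
          my $key    = 'supersearch:' . ($ENV{REMOTE_ADDR} // 'unknown');
          my $limit  = 5;      # requests ...
          my $window = 60;     # ... per 60 seconds

          # add() only succeeds if the key does not exist yet; otherwise incr()
          # bumps the existing counter. The key expires after $window seconds.
          my $count = $memd->add($key, 1, $window) ? 1 : $memd->incr($key);

          if (defined $count && $count > $limit) {
              print "Status: 429 Too Many Requests\r\n",
                    "Content-Type: text/plain\r\n\r\n",
                    "Please slow down.\n";
              exit;
          }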

      Alexander

      --
      Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)