in reply to Re^11: Unable to connect
in thread Unable to connect
For Googlebot you don't need any of that. It at least self-identifies in the User-Agent string, so you can simply block on that in the front end. It will also adhere to the directives in robots.txt.
The last traces of "don't be evil" ... ;-)
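Blocking on the User-Agent in the front end really is that simple. A minimal sketch as PSGI middleware; the wrapper name, the bot patterns and the PSGI setup are just illustrative assumptions on my part, not how the PerlMonks front end actually works:

use strict;
use warnings;

# Wrap a PSGI $app so requests from self-identifying crawlers get a 403.
sub block_crawlers {
    my ($app) = @_;
    return sub {
        my ($env) = @_;
        my $ua = $env->{HTTP_USER_AGENT} // '';
        if ($ua =~ /Googlebot|bingbot/i) {        # well-behaved bots announce themselves here
            return [ 403, [ 'Content-Type' => 'text/plain' ],
                     [ "Crawling is not permitted on this host.\n" ] ];
        }
        return $app->($env);                      # everyone else passes through
    };
}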
So, what do we tell friendly bots?
https://perlmonks.org/robots.txt:
# Please only spider https://www.perlmonks.org not https://perlmonks.org
User-agent: *
Disallow: /
"Go away."
https://www.perlmonks.org/robots.txt:
# Be kind. Wait between fetches longer than each fetch takes.
User-agent: *
Disallow: /bare/
Disallow: /mobile/
Crawl-Delay: 20
"Don't touch /bare/ and /mobile/, and crawl slowly."
Why do we allow bots to fetch Super Search and probably other "expensive" pages? Granted, Super Search is a form whose action is set to POST, and bots should not send POST requests. But obviously, they do.
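If we wanted robots.txt to cover Super Search as well, a wildcard Disallow might look roughly like this. The exact query string for Super Search is an assumption on my part, and wildcards in Disallow are honoured by the major crawlers even though they were not part of the original robots.txt convention:

# Be kind. Wait between fetches longer than each fetch takes.
User-agent: *
Disallow: /bare/
Disallow: /mobile/
Disallow: /*node=Super+Search
Crawl-Delay: 20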
It's the scummy LLM bots who masquerade as normal browsers and come from wide, unpublished IP ranges in a thundering, DDoSing herd who are the real problem these days.
So we are back to rate limiting, and maybe requiring logins for "expensive" pages.
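Even something crude per IP would help against the politer fraction of the herd. A minimal sketch, assuming a single front-end process and an in-memory counter; a real deployment would want shared storage (memcached, the database), and the thresholds here are made up:

use strict;
use warnings;

my %hits;                    # ip address => arrayref of recent request times
my $window   = 60;           # sliding window in seconds
my $max_hits = 10;           # allowed "expensive" requests per window

sub allow_request {
    my ($ip) = @_;
    my $now    = time;
    my $recent = $hits{$ip} //= [];
    @$recent = grep { $_ > $now - $window } @$recent;   # forget old requests
    return 0 if @$recent >= $max_hits;                  # over the limit
    push @$recent, $now;
    return 1;
}

# Usage in the handler for Super Search and friends:
# respond with 429 Too Many Requests unless allow_request($remote_ip) is true.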
Alexander