in reply to perl regex or module that identifies bots/crawlers

What about encrypting the originating IP address plus a timestamp and putting the result at the end of the URL for all valid navigation? Bots and crawlers won't be able to obey your protocol. Your CGIs can run a routine (which you'd write) to decrypt and check the relevant part of whatever URL they were invoked with, and abusers can be auto-detected that way.
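
A minimal sketch of the check in Perl, assuming a shared server-side secret. It signs rather than encrypts (an HMAC over IP + timestamp), which gives the same tamper-evidence with less machinery; the key, token format and expiry window here are all invented for illustration:

    use strict;
    use warnings;
    use Digest::SHA qw(hmac_sha256_hex);

    my $secret = 'change-me';                 # server-side key (placeholder)

    # Append this token to every link you emit.
    sub make_token {
        my ($ip) = @_;
        my $ts = time();
        return "$ts-" . hmac_sha256_hex("$ip:$ts", $secret);
    }

    # In the CGI: reject forged or stale tokens.
    sub check_token {
        my ($ip, $token, $max_age) = @_;
        $max_age ||= 3600;                    # links go stale after an hour
        my ($ts, $mac) = split /-/, $token, 2;
        return 0 unless defined $mac && $ts =~ /^\d+$/;
        return 0 if time() - $ts > $max_age;
        return $mac eq hmac_sha256_hex("$ip:$ts", $secret);
    }

Valid pages would then link to something like /cgi-bin/page.pl?nav=<token> built with make_token($ENV{REMOTE_ADDR}), and a failed check_token() on the way in flags the request as not having followed your navigation.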

As for retaliation, immediate blocking is just too nice. Having set a daily quota for an IP address hitting your CGIs, better to redirect exceeders to the website of your least favourite government organisation - that way they can go spin each other ;) But do block above some other threshold, which you can now afford to set higher. Careful selection of the redirect (e.g. to an FTP variant of some gov. agency URL) will keep the bot busy with the other 'enemy' machine (whatever you define that to be) before it reruns against your URLs, so under those circumstances you can afford a higher threshold for blocking rather than redirecting. You could also use this middle threshold to log the activity, as has already been suggested as a general response.
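
A rough sketch of that tiered counter, again in Perl; the thresholds, the SDBM counter file and the redirect target are placeholders, and file locking and counter expiry are skipped for brevity:

    use strict;
    use warnings;
    use Fcntl qw(O_RDWR O_CREAT);
    use SDBM_File;

    my ($QUOTA, $BLOCK) = (1_000, 5_000);     # assumed daily limits
    my $ip  = $ENV{REMOTE_ADDR} || 'unknown';
    my $day = (localtime)[7];                 # day of year, so counts reset daily

    tie my %hits, 'SDBM_File', '/tmp/cgi-hits', O_RDWR | O_CREAT, 0640
        or die "tie: $!";
    my $n = ++$hits{"$ip:$day"};
    untie %hits;

    if ($n > $BLOCK) {                        # upper threshold: hard block
        print "Status: 403 Forbidden\n\n";
        exit;
    }
    elsif ($n > $QUOTA) {                     # middle threshold: log, then bounce
        warn "quota exceeded: $ip ($n hits today)\n";
        print "Location: http://www.example.gov/\n\n";
        exit;
    }
    # ...otherwise serve the page as usual.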

-M

Free your mind
