in reply to Re: Logfile analysis and automatic firewalling
in thread Logfile analysis and automatic firewalling
I tend to agree with hardburn: actively tracking the IPs in a hash-of-arrays is probably much more efficient than storing every single error in a DB and then calculating error rates after the fact.
Perhaps a sampling mechanism could be employed. Pick a period (5 minutes, for instance). When you see an error from a given IP, you store the IP and the time you saw it, and increment that IP's error counter. As long as the counter is less than 5 minutes old, you keep adding subsequent errors to it. Once it turns 5 minutes old, you check the error count, decide whether the acceptable error rate for that period has been exceeded, and allow or ban the IP accordingly. Then you remove that hash entry and start again with that IP.
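A minimal sketch of what that might look like in Perl; the window length, the threshold, and the hash layout are just placeholders, not a definitive implementation:

```perl
use strict;
use warnings;

# Rough sketch of the sampling idea described above. %errors maps an IP
# to the time its current window started and the errors seen since; the
# window length and threshold are assumptions, tune them to your traffic.
my $WINDOW   = 5 * 60;   # sampling period in seconds
my $MAX_ERRS = 20;       # hypothetical allowed errors per window

my %errors;              # ip => { start => epoch, count => n }
my %banned;              # ip => epoch when the ban was imposed

sub note_error {
    my ($ip, $now) = @_;
    my $rec = $errors{$ip};

    # Still inside this IP's window: just bump the counter.
    if ($rec && $now - $rec->{start} < $WINDOW) {
        $rec->{count}++;
        return;
    }

    # Window expired (or never existed): decide, then start a fresh window.
    $banned{$ip} = $now if $rec && $rec->{count} > $MAX_ERRS;
    $errors{$ip} = { start => $now, count => 1 };
}
```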
When you decide to ban an IP, you can add it to another hash keyed by IP, with the time the ban was imposed as the value. This hash is used to generate the list of IPs to ban. Once a ban has run its course (30 minutes, say), you drop the IP from this hash and the program can remove the corresponding iptables rule.
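Continuing the same sketch (and reusing the %banned hash from above), the bans could be expired along these lines; the iptables invocation is only an assumption and would need to match however your rules were added in the first place:

```perl
# Drop bans older than $BAN_TIME and remove the matching firewall rule.
my $BAN_TIME = 30 * 60;   # ban length in seconds

sub expire_bans {
    my ($now) = @_;
    for my $ip (keys %banned) {
        next if $now - $banned{$ip} < $BAN_TIME;
        system 'iptables', '-D', 'INPUT', '-s', $ip, '-j', 'DROP';
        delete $banned{$ip};
    }
}
```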
You can dump these hashes to files at a given interval, or via a signal handler, so your iptables-generating script can log on, issue a kill with a given signal, and expect your script to spit out the current list of banned IPs.
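The signal-handler variant might look something like this, still building on the %banned hash from the earlier snippets; the dump path and choice of SIGUSR1 are made up for the example:

```perl
# Dump the ban list on demand when another process sends SIGUSR1.
my $DUMP_FILE = '/var/run/banned_ips.txt';

$SIG{USR1} = sub {
    open my $fh, '>', $DUMP_FILE or return;
    print {$fh} "$_\n" for sort keys %banned;
    close $fh;
};
```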