leocharre has asked for the wisdom of the Perl Monks concerning the following question:

I have a web app with user accounts; each user is granted access to do certain things with certain files. They can upload files, "delete" files (they're just moved to a trash dir), rename certain files on disk, etc.

So far I have SSL, a CAPTCHA mechanism for login, and CGI::Verify for checking tainted data (and of course -wT) - everything is Perl CGI. Every time any action runs (at the request of a remote client), all the data is checked for integrity, your CGI::Session is checked for age, etc. Every file you request to do something to is checked to make sure it is within your scope of granted access - that you can do with that file what you are asking the server to do. All of these things are checked *before* anything is sent back to the client agent and before anything the user requested is actually done on the server.
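Roughly, every CGI entry point starts with something like this (a minimal sketch only - user_may_touch() is a stand-in for whatever access check applies, and error handling is trimmed):

    #!/usr/bin/perl -wT
    use strict;
    use CGI;
    use CGI::Session;

    my $q       = CGI->new;
    my $session = CGI::Session->load() or die CGI::Session->errstr;

    # refuse to act on anything with a missing or timed-out session
    if ( $session->is_empty or $session->is_expired ) {
        print $q->header( -status => '403 Forbidden' );
        exit;
    }

    # hypothetical helper: is this file inside the user's granted scope?
    my $file = $q->param('file');
    unless ( user_may_touch( $session->param('user_id'), $file ) ) {
        print $q->header( -status => '403 Forbidden' );
        exit;
    }

    # ... only now do we actually carry out the requested action ...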

The next thing I want to implement is an IP deny mechanism. If someone or some"thing" is trying to log in too many times, or if a valid user is requesting files they do not have access to... then I want to block them, period.

I looked on CPAN for something to handle this, and I don't think I found it. So here is my plan, for your thoughts.

I am thinking I want to log each undesirable action as a 'warning' - recorded in a database, or perhaps simply in the filesystem. These warning flags will reset each day. So...

Every time a remote client attempts to log in and the credentials fail, a warning file is made, such as /tmp/xxx.xxx.xxx.xxx.warning_type.unix_timestamp.

If the remote client makes a request for a file or an action they do not have rights to, then (not only does the request fail, but also) maybe 5 warning files are made. The more severe the action, the more warning files are made.

Every time a warning file is made, the directory (/tmp, or /tmp/warnings, or wherever) is checked for how many warning files are present for that IP (/tmp/xxx.xxx.xxx.xxx.*). If more than x files are present, then I make an entry in an .htaccess file: a "deny from xxx.xxx.xxx.xxx" rule, maybe with a comment noting when it was added and why.
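Something along these lines is what I'm picturing (a rough sketch only - the paths, the threshold, and the log_warning() name are placeholders, and the real thing would want file locking and a sanity check on the IP string):

    use strict;
    use warnings;

    my $WARN_DIR  = '/tmp/warnings';           # assumption: dedicated warnings dir
    my $HTACCESS  = '/var/www/html/.htaccess'; # assumption: path to the protected area
    my $THRESHOLD = 10;                        # assumption: ban after 10 warnings

    sub log_warning {
        my ( $ip, $type ) = @_;

        # one file per warning: xxx.xxx.xxx.xxx.warning_type.unix_timestamp
        my $flag = "$WARN_DIR/$ip.$type." . time();
        open my $fh, '>', $flag or die "can't write $flag: $!";
        close $fh;

        # how many warnings has this IP piled up so far?
        my @warnings = glob "$WARN_DIR/$ip.*";
        if ( @warnings >= $THRESHOLD ) {
            open my $ht, '>>', $HTACCESS or die "can't append to $HTACCESS: $!";
            print {$ht} '# auto-banned ' . localtime() . ' after ' . scalar(@warnings) . " warnings\n";
            print {$ht} "deny from $ip\n";
            close $ht;
        }
    }

    # e.g. on a failed login:
    # log_warning( $ENV{REMOTE_ADDR}, 'bad_login' );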

I am thinking of making the module so that a small script could use it; that way other languages or other kinds of sites could use it simply by calling it. For example, when a user fails to log in you send them to the "I'm sorry" page, and maybe an SSI directive there calls the Perl script with the remote address of the client agent.
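For the SSI case I'm imagining something like this (untested sketch; warn_ip.pl is a made-up wrapper, and it assumes mod_include with exec enabled, which passes REMOTE_ADDR along in the environment):

    #!/usr/bin/perl -wT
    # Called from the "I'm sorry" page via an SSI directive, e.g.:
    #   <!--#exec cmd="/usr/local/bin/warn_ip.pl bad_login" -->
    # mod_include hands the command the usual CGI environment, so the
    # client address is available here without any extra plumbing.
    use strict;

    my $type = shift || 'unknown';
    my $ip   = $ENV{REMOTE_ADDR}
        or exit;    # not invoked through the webserver, nothing to record

    # hand the address to the warning logger sketched above, e.g.:
    # log_warning( $ip, $type );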

Sounds useful? Stupid? Been done? Where? Would this seem useful to anyone? Please comment.


Re: A module to deny ip on multiple sketchy http requests, yes, no?
by perrin (Chancellor) on May 25, 2006 at 16:33 UTC
Re: A module to deny ip on multiple sketchy http requests, yes, no?
by ruzam (Curate) on May 25, 2006 at 16:23 UTC
    I don't like mucking with .htaccess files, but maybe that's just me.

    If you have the means to track and count warnings, then you also have the means to restrict access within the script. Why not simply return a 'forbidden' page from the script?

    I've kind of been doing the same thing with one of my web apps. Failed login attempts are recorded in a table keyed by the client address (found in $ENV{'REMOTE_ADDR'}). Once a failure is recorded, any following attempts to log in from the same client return an 'unauthorized' message (the failure records expire after 10 seconds).
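    Stripped down, it looks something like this (a rough DBI sketch; the table and column names are made up and it assumes MySQL for the date arithmetic):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'dbi:mysql:webapp', 'user', 'pass',
            { RaiseError => 1 } );
        my $ip = $ENV{REMOTE_ADDR};

        # any unexpired failure on record for this client?
        my ($blocked) = $dbh->selectrow_array(
            'SELECT COUNT(*) FROM login_failures
              WHERE ip = ? AND failed_at > NOW() - INTERVAL 10 SECOND',
            undef, $ip
        );

        if ($blocked) {
            print "Status: 401 Unauthorized\r\n\r\n";
            exit;
        }

        # ... check the credentials as usual ...
        my $login_ok = 0;    # placeholder for the real credential check

        unless ($login_ok) {
            $dbh->do(
                'INSERT INTO login_failures (ip, failed_at) VALUES (?, NOW())',
                undef, $ip
            );
        }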

      Yes, I agree - I want to offer that as well, an option to use the script itself to deny. I just think... I want to use as much existing technology as possible and not re-invent the wheel.

      Apache has a deny-from-IP rule, which really blocks the client at the base. Using that, even if the CGI were to stop working, the client still could not get in via HTTP. It is a self-protection mechanism.

      Think of the module as a butler who answers the door. You knock on the house, the door opens, and the butler appears and says "whatup?!" - If we simply use the butler, he may say "sorry, no... you can't come in. We don't like you."... So maybe you come back with a shotgun...
      But if we let the butler do more... maybe he rigs the house to know who you are... When you knock, the door won't even open; you won't even see the butler, much less the house.

      There is a slight difference here - maybe. I am actually expecting cracking attempts on these machines. The attempts *will* happen. And the attackers are not going to be using a browser to punch in data.

      If I do what you did... I could still program a dictionary attack that - perhaps - would work, since after 10 seconds the whole thing is forgiven?

      Anyhow, my concern is that I know this software will undergo attacks.

        In my case the 10 sec delay is acceptable. This limits a dictionary attack to 6 tries a minute instead of however many times the webserver can respond in the same minute. Actually, I guess the webserver will still be responding as fast as it can; only 1 crack attempt will be allowed through every 10 seconds, and the rest will be thrown away. If the script can be served 100 times a second, then only 1 in 1000 crack attempts will actually be tried. The cracker will work his way through the entire dictionary with only a tiny percentage actually making it through. (Granted, it only takes one success to mess you up.)

        But that's just my code, which is still a work in progress. You could easily keep count of the number of attempts made during the 10-second timeout and permanently add the address to another table, or extend the timeout indefinitely, if a threshold is reached (say, more than 2 attempts). The 10-second rule doesn't have to be temporary; it's just an arbitrary timeout.
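        Roughly like this (sketch only; the banned_ips table and the threshold of 2 are just examples, and ip would need a unique key):

            use strict;
            use warnings;
            use DBI;

            my $dbh = DBI->connect( 'dbi:mysql:webapp', 'user', 'pass',
                { RaiseError => 1 } );
            my $ip = $ENV{REMOTE_ADDR};

            # how many failures inside the current 10-second window?
            my ($attempts) = $dbh->selectrow_array(
                'SELECT COUNT(*) FROM login_failures
                  WHERE ip = ? AND failed_at > NOW() - INTERVAL 10 SECOND',
                undef, $ip
            );

            # past the threshold, promote the address to a permanent ban
            # table instead of letting the failure records quietly expire
            if ( $attempts > 2 ) {
                $dbh->do(
                    'INSERT IGNORE INTO banned_ips (ip, banned_at) VALUES (?, NOW())',
                    undef, $ip
                );
            }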

        Others rely on Apache to provide the security, and I guess that's ok for them. Myself, I've never been keen on leaving it to the webserver. If your CGI application needs security, build it into the CGI. You never know when the server configuration or file permissions may change (or you may not have control over it), and then all your .htaccess rules go out the window. But like I say, that's just me.
Re: A module to deny ip on multiple sketchy http requests, yes, no?
by arkturuz (Curate) on May 25, 2006 at 16:29 UTC
    The rules you mention, based on IP address, are not practical - IP addresses change. Rules based on cookies are not practical if the user deletes them and creates a new account in your app. Also, logging user actions is better done with an SQL database.
    Users will always try to find some way to circumvent your rules. Never trust the user.
Re: A module to deny ip on multiple sketchy http requests, yes, no?
by TedPride (Priest) on May 25, 2006 at 21:04 UTC
    Make a table with (a) IP or IP range (you might want a 2- or 3-character hash index to speed up lookups), (b) most recent access time, and (c) the number of requests made within x seconds of the most recent access time. If (c) goes over a certain limit, ban them using .htaccess. Also run a cron job every few hours that reduces the (c) count by 1 per hour or so, so legit users who access too fast every now and then won't get auto-banned.

    If the user is logged in, you can of course do this with their user name instead of IP.
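    A rough sketch of that table in use (column names, limits, and the MySQL-flavoured upsert are only illustrative; ip would be the primary key):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'dbi:mysql:webapp', 'user', 'pass',
            { RaiseError => 1 } );

        # per request: bump the counter, or start a fresh row for this address
        sub record_hit {
            my ($ip) = @_;
            $dbh->do(
                'INSERT INTO hits (ip, last_seen, req_count) VALUES (?, NOW(), 1)
                   ON DUPLICATE KEY UPDATE last_seen = NOW(), req_count = req_count + 1',
                undef, $ip
            );
            my ($count) = $dbh->selectrow_array(
                'SELECT req_count FROM hits WHERE ip = ?', undef, $ip );
            return $count;    # caller bans via .htaccess once this passes the limit
        }

        # from cron every hour or so: cool the counters down so bursty but
        # legitimate users drift back under the limit
        sub decay_counts {
            $dbh->do('UPDATE hits SET req_count = req_count - 1 WHERE req_count > 0');
        }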