leocharre has asked for the wisdom of the Perl Monks concerning the following question:

I have a web application I am considering revisiting. I am concerned that one aspect of the application's security may be overkill. The application runs on Linux/Apache.

The application runs through ssl only.

Each user can access parts of the filesystem hierarchy, top down from certain entry points- kind of like the /home/user setup, only the hierarchies are much deeper.

The system records, for each user, which points in the filesystem hierarchy they can access top down.

What I was trying to protect against by encrypting the requested paths: a user who has been validated could be an attacker. If they can read /home/user/books but not /home/user, a request for /home/user is denied because the system registers that this user cannot read it. But because the information needed to *make* the request for /home/user is already a string encrypted by the server with a unique per-session key, the would-be attacker can't even *make* the request for /home/user- it will decrypt to garble at the server.

I am already running this via SSL only, and checking that users can indeed read what they ask for- so I am wondering whether I should stop encrypting and decrypting the request data, and whether it is just an added step that slows everything down (roughly by half).

I'm sorry for the long story- I can't figure out how to shorten it. Any opinions? I would greatly appreciate it.

  • Comment on In a web app, is using ssl, encrypting request data, and validating request data after decryption overkill?

Replies are listed 'Best First'.
Re: In a web app, is using ssl, encrypting request data, and validating request data after decryption overkill?
by Old_Gray_Bear (Bishop) on Jul 02, 2007 at 15:54 UTC
    The short answer is "No". There is no such thing as 'too much paranoia'.

    "If you make it 'idiot proof', the Universe will develop a better Idiot" -- The Darwinian Rule of Software Development.

    Seriously, it is very hard to go over-board in checking what an unknown (and possibly malicious) User has sent you. Bear in mind that from time to time new attack vectors appear and encryption methods are compromised. Having your suspenders buttoned on tight as well as buckling your belt can be the difference between sleeping the night through and the O'Dark Hundred phone call ....

    ----
    I Go Back to Sleep, Now.

    OGB

      Seconded. Users are dirty, untrustworthy creatures who should all be rounded up and shot (but I guess they'd have a hard time using your app; then again, that'd fix any load problems . . . hrmm, tough call :).

      Never trust anything provided by a user. If at all possible, don't even send them the information you're going to use in the first place. Send them a "token" (the word I'd normally use here is "cookie", but in this context that might overload the term a bit much; it might well be keyed off an HTTP cookie, or it could be something in the URL), then use your own copy of the information keyed by that token instead (after verifying that it matches up with whatever authentication you're doing, of course). They see a display copy of the information, but you work from your version and validate / scrub / cleanse any changes you receive from the user before updating it.
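
      For illustration, a minimal Perl sketch of that "keep the real data server-side, hand out only a token" idea. The %SESSIONS store and the new_session() / session_data() names are made up for this example, and a real app would want a stronger source of randomness and a persistent session store:

          use strict;
          use warnings;
          use Digest::SHA qw(sha256_hex);

          # Hypothetical in-memory session store: opaque token => server-side data.
          # A real app would keep this in a server-side session file or database.
          my %SESSIONS;

          # Hand the client nothing but an unguessable token (cookie or URL piece).
          sub new_session {
              my (%data) = @_;
              my $token = sha256_hex( time() . $$ . rand() );   # illustration only; use better randomness in production
              $SESSIONS{$token} = {%data};
              return $token;
          }

          # On each request, look the token up and work from *your* copy of the data.
          sub session_data {
              my ($token) = @_;
              return $SESSIONS{$token};    # undef => unknown or forged token
          }

          my $token = new_session( user => 'alice', root => '/home/alice/books' );
          my $data  = session_data($token) or die "no such session\n";
          print "serving under $data->{root}\n";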

Re: In a web app, is using ssl, encrypting request data, and validating request data after decryption overkill?
by blokhead (Monsignor) on Jul 02, 2007 at 16:19 UTC
    If the app only runs via SSL, then what is the purpose of further encrypting things? Encryption is used when you want to prevent certain parties from reading messages. So who are you trying to keep from seeing the request data? An eavesdropper on the network? The SSL layer already prevents this. The client? They already know the request data, since they requested it. The server? It gets to decrypt it anyway to process the request.

    The only other thing I can think of is that you're worried about cache files on the client's computer revealing the request data. But presumably (correct me if I'm wrong), the responses from the server already include ample information related to the request (i.e., if I request "/home/user/books", the response will be a nice page with a big header that says "/home/user/books").

    Having two seemingly identical layers of encryption sounds pretty redundant to me. Especially when one is at a lower level in the protocol stack, standardized, and well-studied by cryptographers (I have no idea what encryption you are proposing to use on top of the SSL). I don't think this is a case of having two different security mechanisms (suspenders and a belt, using Old_Gray_Bear's terminology) -- it's like using another little tiny belt to hold the buckle onto your first belt! ;)

    I'm sorry for the long story- I can't figure out how to shorten it.
    Shorten it by using fewer words, not by using <small> tags, though I didn't think it was very long to begin with. ;) I could barely read the stuff in the smaller font on my browser.

    blokhead

      I apologize about the small font, my mistake.

      All they have once authenticated is a cookie with a session id. Changing the cookie in any way gets no response from the server other than "log in again". So I am not concerned about the data residing on the client. Once the data is retrieved, it is out of my hands. If they are using a sh1++y browser, a kamikaze OS, or the user is just plain stupid- the buck stops there.

      My responsibility is that no sensitive data the user does not have access to leaves the network in the first place.

      SSL addresses packet interception/snooping, indeed. However, there is another case for considering further encryption: a genuine user with genuine rights to be inside the system could be a potential attacker- maybe they guessed a username and password, even while having to deal with a captcha.

      Once that user is authenticated and inside, they are limited to reading certain places. They start off by choosing a place they can read from a premade list of relative paths, each of which the server has encrypted using a Crypt::CBC DES cipher.

      Yes, the user may see what the path is with their own eyes, but that is not how the server understands a request for a path.

      The key for encrypting is created when the session is made, and stored on the server. Thus, path requests for one session are not valid for another session and translate to garble when they reach the server.

      Once a request for a point in the filesystem hierarchy reaches the server, it is decrypted, checked against what this user indeed has read access to, and then checked for existence, etc.
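
      As a hedged sketch (not the actual code- the key handling, the can_read() permission check, and the paths are placeholders), the encrypt-then-decrypt-and-verify flow described above might look roughly like this with Crypt::CBC:

          use strict;
          use warnings;
          use Crypt::CBC;    # requires Crypt::DES for the DES cipher

          # Per-session key, created when the session is made and stored server-side
          # (the session storage itself is omitted in this sketch).
          my $session_key = 'abcd';    # stand-in for the short 4-character key mentioned above
          my $cipher = Crypt::CBC->new(
              -key    => $session_key,
              -cipher => 'DES',
          );

          # When building the list offered to the client, each readable path is encrypted.
          my @offered = map { $cipher->encrypt_hex($_) }
                        ('/home/user/books', '/home/user/books/perl');

          # When a request comes back: decrypt, then *still* verify permissions and existence.
          sub handle_request {
              my ($encrypted, $user) = @_;
              my $path = eval { $cipher->decrypt_hex($encrypted) };
              return 'denied' unless defined $path && $path =~ m{^/};   # a foreign-session request decrypts to garble
              return 'denied' unless can_read($user, $path);            # placeholder ACL check
              return 'denied' unless -e $path;                          # existence check, as described
              return "ok: $path";
          }

          sub can_read { return 1 }    # placeholder: the real permission check lives elsewhere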

      This prevents a user client from requesting a resource which was not originally offered by the server.

      If you are a valid user and break the encryption so you can generate valid requests, you still have to deal with the fact that the server will check that you really can read this place in the filesystem hierarchy.

      It appears that my *human* instincts think that preventing the client from making a valid request in the first place is important indeed.

      Where my *rational* side realizes that the most important part here is deciding, once a request is present, whether it is OK to allow it or not.

      This form of captcha can be broken by software. I'm sure you know that with your background.

      The encryption key I am using is short: 4 characters. That could possibly be broken (sessions cannot age more than 45 minutes- they have to log back in- but maybe that is enough time to crack it).

      The request paths are normalized to an absolute location on disk with Cwd::abs_path(), so ../.././ kinds of junk get resolved- this helps check that they can read the place requested.
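
      For instance, a sketch of that normalize-then-check step (the allowed-roots list is made up for illustration; the real check is whatever the app already does):

          use strict;
          use warnings;
          use Cwd qw(abs_path);

          # Resolve ../ and symlink tricks to a real absolute location, then make
          # sure the result still falls under a root this user is allowed to read.
          sub resolve_and_check {
              my ($requested, @allowed_roots) = @_;
              my $real = abs_path($requested);
              return unless defined $real;    # abs_path typically gives undef for a nonexistent path
              for my $root (@allowed_roots) {
                  return $real if $real eq $root;
                  return $real if index($real, "$root/") == 0;   # strictly under an allowed root
              }
              return;    # outside every allowed root
          }

          my $ok = resolve_and_check( '/home/user/books/../books/perl.txt',
                                      '/home/user/books' );
          print $ok ? "read $ok\n" : "denied\n";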

      I would think that having all these things together might help make it a little bit harder and a little more annoying to actually get through the web side of things.

      (Please forgive me if I come off as patronizing or redundant- I am trying to be clear.)

        So you want to prevent logged-in users from making requests for things they aren't allowed to see. You mentioned in the OP that the server already verifies the permissions of each request, so what more is there to protect against?

        I mentioned in my reply that the (only) purpose of encryption is to achieve data secrecy. Your proposed application of encryption addresses a data validation problem, not a secrecy problem. Let me explain.

        This prevents a user client from requesting a resource which was not originally offered by the server.
        It sounds like you have a mental model of the server handing out tokens for certain kinds of requests. To make a request, the client just sends back one of its tokens, with the implicit security assumption that only the tokens that were generated by the server (and for that particular user) should be accepted. Again, this is not a secrecy problem but a validation problem.

        Crypto tools like digital signatures and MACs (not encryption schemes) are designed for validating the source of data. But even those are overkill here. In this case, the person who validates is the same as the person who generates the data. So you don't need crypto at all -- to validate something, just check whether it was something you previously generated. This is effectively what you do by only giving out "tokens" for things with ok permissions, and checking the same permissions on request "tokens" you get back.
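
        In Perl terms, that "just check whether you previously generated it" approach can be as simple as a per-session lookup (a hedged sketch; the %offered hash and the paths are illustrative):

            use strict;
            use warnings;

            # Per-session record of exactly what the server offered this user.
            # Validation is then just "did I hand this out?" -- no crypto needed.
            my %offered = map { $_ => 1 } ('/home/user/books', '/home/user/books/perl');

            sub valid_request {
                my ($path) = @_;
                return $offered{$path} ? 1 : 0;    # anything not previously offered is rejected
            }

            print valid_request('/home/user/books') ? "ok\n" : "denied\n";   # ok
            print valid_request('/home/user')       ? "ok\n" : "denied\n";   # denied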

        Anyway, from what you have described, it sounds like you have the right paranoid mindset about servicing user-generated requests, so that's good! Better safe than sorry, but in this case I don't think that tacking on more encryption will really help much in the way you intend.

        blokhead

        It appears that my *human* instincts think that preventing the client from making a valid request in the first place is important indeed.

        You seem to already be questioning the results of your human instincts, since you posted the question here in the first place. I think you are right to question them, but you may have missed the most important question, which only you can answer:

        If the client requests access to something they have no rights to and the server is set up to detect this and deny the request, what harm is done by them asking?

        In most cases, the answer is "none", so your encrypted token scheme, by preventing invalid requests, prevents no harm and is pointless overhead. But your case may be an exception to that generality.

Re: In a web app, is using ssl, encrypting request data, and validating request data after decryption overkill?
by clinton (Priest) on Jul 03, 2007 at 08:24 UTC
    Don't forget that the "directory structure" that you present to your user may or may not exist. You may show them a tree like /home/user/books, but the actual implementation may be that the data is stored in the database, and your path is really a category, rather than a real path.

    The important part is this: check whether the user has the authority to see whatever they have requested.

    Make sure that everything is protected, but the more "super security" you build in, the more chance there is for bugs to creep in and reduce the actual security.

    Clint