I apologize for the small font; my mistake.
All they have once authenticated is a cookie with a session id. Changing the cookie in any way has no impact on the server's response other than "log in again". So I am not concerned about the data residing on the client. Once the data is retrieved, it is out of my hands. If they are using a sh1++y browser, a kamikaze OS, or the user is just plain careless, the buck stops there.
My responsibility is to make sure that no sensitive data the user does not have access to leaves the network in the first place.
SSL is there to address packet interception and snooping, indeed. However, there is another case for considering further encryption: one in which a genuine user, with genuine rights to be inside the system, could be a potential attacker. Maybe they guessed a username and password and managed to do so despite the captcha.
Once that user is authenticated and inside, they are limited to reading certain places. They start off by choosing a place they can read from a premade list of relative paths, which the server has encrypted using Crypt::CBC with the DES cipher.
Yes, the user at the client end can see what the path is with their own eyes, but that is not the form in which the server accepts a request for a path.
The encryption key is created when the session is made and stored on the server. Thus, path requests from one session are not valid for another session; they translate to garble when they reach the server.
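To make that concrete, the session-setup side looks roughly like this. This is a minimal sketch, not my actual code: the key generation, the sample paths, and the variable names are just for illustration.

    use strict;
    use warnings;
    use Crypt::CBC;

    # At session creation: make a short key and store it server-side along
    # with the session.  (Key generation here is illustrative only.)
    my $session_key = join '', map { chr( 33 + int rand 94 ) } 1 .. 4;

    my $cipher = Crypt::CBC->new(
        -key    => $session_key,
        -cipher => 'DES',            # needs Crypt::DES installed
    );

    # Encrypt each relative path this user may read into an opaque token;
    # the premade list handed to the client is built from these tokens.
    my @offered   = ( 'docs/reports', 'docs/invoices' );
    my %token_for = map { $_ => $cipher->encrypt_hex($_) } @offered;

The client may be shown the readable path alongside its token, but the request itself carries only the token.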
Once a request for a point in the filesystem hierarchy reaches the server, it is decrypted, checked against what this user actually has read access to, and then checked for existence and so on.
This prevents a client from requesting a resource that was not originally offered by the server.
If you are a valid user and you break the encryption so that you can generate valid requests, you still have to deal with the fact that the server will check whether you really may read that place in the filesystem hierarchy.
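In rough terms, the receiving end does something like this. Again a sketch: $session_key, %allowed_paths, and $docroot are stand-ins for whatever the real session storage and configuration provide.

    use strict;
    use warnings;
    use CGI;
    use Crypt::CBC;

    # Stand-ins for what would come from the session store and configuration.
    my $session_key   = 'xxxx';    # the key made for this session
    my %allowed_paths = map { $_ => 1 } ( 'docs/reports', 'docs/invoices' );
    my $docroot       = '/var/www/data';

    my $q      = CGI->new;
    my $cipher = Crypt::CBC->new( -key => $session_key, -cipher => 'DES' );

    # A token made under another session's key either fails to decrypt or
    # comes out as garble, so it will never match the allow-list below.
    my $path = eval { $cipher->decrypt_hex( scalar $q->param('path') ) };

    defined $path && $allowed_paths{$path}
        or die "refused: not something this user was offered\n";

    -e "$docroot/$path"
        or die "refused: no such resource\n";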
It appears that my *human* instincts say that preventing the client from being able to construct a valid request in the first place is what matters most.
My *rational* side realizes that the most important part here is deciding, once a request is present, whether it is ok to allow it or not.
This form of captcha can be broken by software. I'm sure you know that with your background.
The encryption key I am using is short: 4 characters. That could possibly be broken (sessions cannot age more than 45 minutes before the user has to log back in; maybe that is enough time to crack it).
The request paths are normalized to an absolute location on disk with Cwd::abs_path(), so "../.././"-style junk gets resolved; this helps check that they can actually read the place requested.
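For example, roughly (a sketch; the base directory and the hostile path here are made up):

    use strict;
    use warnings;
    use Cwd qw(abs_path);

    my $base           = abs_path('/var/www/data');   # area this user may read
    my $decrypted_path = '../../../etc/passwd';       # a hostile example

    # abs_path() collapses the ../ junk against the real filesystem; it can
    # return undef (or die) if the path does not exist, so guard it.
    my $requested = eval { abs_path("$base/$decrypted_path") };

    # Refuse anything that resolves outside the permitted base directory.
    unless ( defined $requested && index( $requested, "$base/" ) == 0 ) {
        die "refused: resolves outside the permitted area\n";
    }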
I would think that having all these things together makes it a little harder and a little more annoying to actually get through the web side of things.
(Please forgive me if I come off as patronizing or redundant; I am trying to be clear.)