(tye)Re: is I/O checking worth it?
by tye (Sage) on Jan 14, 2001 at 00:15 UTC
First, don't bother trying to check file permissions in advance. The only reliable way to check file permissions is to attempt the actual operation, so check for failure and report a good error message. You might see advice to use access() to check permissions, but that is a misleading API. It was never meant for checking generic file permissions; it was meant only for use by set-UID programs, to make a rough guess about whether a file would still be accessible once the set-UID privileges were taken away. And it only gives a rough guess, ignoring lots of potential reasons for an access to fail or succeed.

Second, the way you talk about this checking makes it sound like you are just going to generate a bunch of race conditions. For example, checking whether a file exists, say with -e, before attempting to overwrite it causes a race condition: the file can appear in the gap between the check and the open. Instead, open the file in a way that will fail if the file already exists; then you can check whether an existing file is what caused the open to fail and deal with it at that point. If, for example, your desired behavior when avoiding an overwrite is to rename the current file before opening the new one, you still need to open the new file in a way that fails if it already exists, or you get a race condition again.

The symlink trick only works when a privileged program (such as a set-UID program) keeps its privileges while working with files in a directory for which it doesn't need those privileges (such as /tmp). If you can use a symlink to redirect non-/tmp files, then you already have the privilege to move the file directly and don't need a symlink trick. A much better solution than checking for symlinks is to have your set-UID program drop its privileges whenever it deals with places where it doesn't need them.

I don't think checking for tricky things is usually a good idea. If your program isn't set-UID (or privileged in some other way), then these tricks really don't pose a security hole and may actually be used legitimately to work around some temporary system problem. But even if there are no security issues to worry about, it is a good idea to avoid race conditions.

- tye (but my friends call me "Tye")
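A minimal sketch of the race-free open described above, using sysopen() with O_EXCL so the existence check and the creation happen in one atomic system call (the filename is hypothetical):

    use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

    my $file = 'report.txt';

    # O_EXCL makes the open fail if the file already exists, so there
    # is no window between "check" and "create" for another process
    # to slip a file in.
    if (sysopen my $fh, $file, O_WRONLY | O_CREAT | O_EXCL) {
        print $fh "data\n";
        close $fh or die "Cannot close '$file': $!";
    }
    elsif ($!{EEXIST}) {
        # The open failed precisely because the file exists;
        # handle that case here (rename, prompt, skip, ...).
        warn "'$file' already exists, refusing to overwrite\n";
    }
    else {
        die "Cannot create '$file': $!";
    }

Note that the failure reason is examined after the fact via %!, rather than tested for beforehand.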
|
Re (tilly) 1: is I/O checking worth it?
by tilly (Archbishop) on Jan 14, 2001 at 05:14 UTC
A good step regardless is to have every open test whether it succeeded. I believe in doing it the way perlstyle says, having the error message include the filename, the attempted operation, and $!. If you need to read and write files but don't want to follow symlinks, this can get fairly tricky. The following code (which will fail horribly on systems without symlinks) demonstrates how to do it safely. In general, if you need temporary files, do not attempt to roll that yourself. Use File::Temp. Really. Also note that if you are concerned with security, you may want to think about locking. For an example (which could easily be improved) that I came up with a while ago, see Simple Locking. With luck this should give you some ideas on how to improve the security of your programs.
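(The code itself was attached to the original post. As an illustration only, here is a minimal sketch of one symlink-refusing open, not necessarily tilly's exact technique: it lstat()s the name, refuses symlinks outright, then stat()s the opened handle and confirms the device and inode still match, so a symlink swapped in between the two calls is also caught. The filename is hypothetical.)

    use Fcntl qw(O_RDONLY);

    # Open $file read-only, refusing to follow a symlink.
    sub open_no_symlink {
        my $file = shift;
        my @before = lstat $file
            or die "Cannot lstat '$file': $!";
        die "'$file' is a symlink, refusing to open\n" if -l _;
        sysopen my $fh, $file, O_RDONLY
            or die "Cannot open '$file' for reading: $!";
        my @after = stat $fh
            or die "Cannot stat handle for '$file': $!";
        # Device and inode must match what we lstat()ed, or the
        # name was switched underneath us.
        die "'$file' changed while opening, possible attack\n"
            if $before[0] != $after[0] or $before[1] != $after[1];
        return $fh;
    }

    my $fh = open_no_symlink('data.txt');

For temporary files, File::Temp does the equivalent dance for you: my ($tmp_fh, $tmp_name) = File::Temp::tempfile(); hands back a handle that was created safely.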
by Beatnik (Parson) on Jan 14, 2001 at 16:40 UTC
Locking will only work among processes that understand the concept. If applications don't honor file locking, they can do whatever they want with the files. Perl, of course, honors the locking. Not all OSs have flock implemented; a good example is Windows (not that I use it). flock will actually break your script if it's run on a platform that doesn't support it. What about the file-versus-directory check? A file can be opened, a dir can't (in the file sense). Will -d suffice? =)
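A small sketch of defending against that breakage: flock() raises a fatal error at runtime on platforms where it is unimplemented, so one common pattern is to probe for it inside an eval and degrade gracefully (the filename is hypothetical):

    use Fcntl qw(:flock);

    open my $fh, '<', 'data.txt'
        or die "Cannot open 'data.txt' for reading: $!";

    # Trap the "unimplemented" error in an eval rather than
    # letting it kill the script.
    my $locked = eval { flock($fh, LOCK_SH) };
    if ($@) {
        warn "flock unavailable on this platform, proceeding unlocked\n";
    }
    elsif (!$locked) {
        die "Cannot lock 'data.txt': $!";
    }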
by tilly (Archbishop) on Jan 14, 2001 at 23:31 UTC
As for the rest, it is generally a far sounder strategy to open in a non-destructive manner and then test; testing first opens up race conditions. Beyond that, putting in a ton of paranoid checks tends to create unmanageable messes. The harder you make security, the less likely it is to happen. Make it easy to be secure (e.g. through a small number of functions like the one I wrote above) and think about how it fits your overall policy. (I might work as a non-privileged user in directory structures whose permissions are locked down to just that user, and leave it at that. If I want to put a symlink in there, that is probably OK.)

In general, make sure that things are sane, program in a way where unexpected inputs cannot be misunderstood, and make it simple to maintain that. But if you set up a complex scheme that is merely supposed to be followed (and without seeing what you do, I have no idea whether this applies in your case), you have set yourself up for failure. Complex schemes tend to erode security.
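A sketch of what "open first, test afterwards" can look like: Perl's file tests accept a filehandle as well as a name, and a test run against the handle describes what was actually opened, which also answers the -d question above without a separate race-prone check on the name (the filename is hypothetical):

    my $file = 'config.txt';
    open my $fh, '<', $file
        or die "Cannot open '$file' for reading: $!";

    # Test the handle, not the name: the answer refers to the thing
    # we opened and cannot be changed out from under us afterwards.
    die "'$file' is not a plain file\n" unless -f $fh;

On some systems a read-only open of a directory succeeds, so the -f test on the handle is not redundant.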
Re: is I/O checking worth it?
by moen (Hermit) on Jan 13, 2001 at 21:50 UTC
Second, yes, I really think one should; it's a matter of security and good programming practice. Using symlinks when compromising a machine is very common, and easy if you locate scripts that don't check for symlinks and are executed as root (or any other user, for that matter).
So messing around deleting, creating, and modifying files on your system without checking whether it's sane or not is just plain stupid.