Re^2: LAMP svrs - 1 or 2 is best ?
by j_c (Novice) on May 10, 2005 at 14:36 UTC
We are aiming for a high-availability cluster. The network stack does create a bottleneck between servers, but at 1 Gbit it should be small, and the benefit is redundancy: a server can fall over and we can continue service on another. The rub is how to state quantitatively what performance is lost or gained in the n-tier setup.
I can see that it lets you increase the number of concurrent users, but at what cost in latency between the db server and the web server?
You do understand that the cost of a high-availability cluster is extremely high. For every 9 you add, you increase the cost 10x. So, if it costs $10,000/year to provide 99% uptime, it will cost $100,000/year to provide 99.9% uptime. And, so on.
Those values don't mean very much, so let's use some useful numbers. There are 86,400 seconds in a day. Assuming exactly 365 days in a year, you have 31_536_000 seconds in a year. 99% uptime means you are allowed to be down for 315_360 seconds. That's 3.65 days a year, or about 1h 41m every week. (If your maintenance window is 2 hours every Sunday night, you just blew 99% uptime.) 99.9% uptime is 0.365 days, or roughly 8h 45m, of downtime. 5-9's, or 99.999%, uptime means that you're allowed to be unavailable for 315.36 seconds, or about 5m 15s, in a year. That's maintenance, backups, and everything.
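The downtime arithmetic above is easy to check in a few lines of Perl (the figures follow directly from the 365-day year; the 10x-per-nine cost multiplier is a rule of thumb, not computed here):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Seconds in a non-leap year, as above.
my $year = 86_400 * 365;    # 31_536_000

# Allowed downtime per year for 2, 3, and 5 nines of uptime.
for my $nines (2, 3, 5) {
    my $uptime   = 1 - 10**-$nines;          # 0.99, 0.999, 0.99999
    my $downtime = $year * (1 - $uptime);    # seconds of allowed downtime
    printf "%.3f%% uptime: %10.2f s/year (%6.2f min/week)\n",
        100 * $uptime, $downtime, $downtime / 52 / 60;
}
```

Running it reproduces the numbers above: 315_360 s/year (about 101 minutes a week) at 99%, 31_536 s/year at 99.9%, and just 315.36 s/year at five nines.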
I hope your company plans on making a lot of money with this site, because your administration costs for high availability are going to be a bitch.
- In general, if you think something isn't in Perl, try it out, because it usually is. :-)
- "What is the sound of Perl? Is it not the sound of a wall that people have stopped banging their heads against?"
So do you have some stats comparing 1 and n servers, in terms of performance? Would I be able to serve 1000 users at 20 secs per page, or 100 users at 1 second per page? If I buy a car, I like to know how much better it's going to perform than my current one; I already know it will cost more to run because it's a bigger car...
Re^2: LAMP svrs - 1 or 2 is best ?
by eXile (Priest) on May 10, 2005 at 16:00 UTC
Performance isn't the issue - it's security. If you have your webserver outside your firewall and your database inside your firewall, then you can regulate exactly who gets access to the database server. It's that simple.
I don't understand how 2 servers is more secure than 1. I tend to think it's the opposite: 2 servers means twice the chance of configuration errors, and I don't think putting the webserver and database on different machines gives you security you can't achieve with one machine (you can firewall a database server on the same machine just as you can firewall it on a different machine).
I think the biggest advantage of having one service per machine is that you can tune each machine specifically and independently for its service, and add 'more iron' (i.e. RAM, CPU, faster disks) if one service's performance falls below satisfactory levels.
Update: I see gellyfish has already made the point I make in my last paragraph (see his comments below)
Assume you have a hardware firewall. This means that to cross the FW threshold, you have to be authorized in some fashion or another. The webserver is, generally, put into the DMZ outside this firewall. You're still going to lock down the ports, chroot the webserver and do all that stuff. But, it has to be outside the firewall so that the outside world knows how to get to it.
The DB server is inside the FW. It doesn't have an outside-accessible name. The webserver, because it's physically connected to the FW, has access to the internal DNS, so it knows how to find the DB server.
Basically, it's an additional layer protecting the only thing that's important - the data. You can't just hack the DB server - you have to hack the webserver to hack the DB, and even then, you only have the access the web application has.
While I agree that putting the db and webserver on different machines can provide an additional layer of protection (for instance against an attack where the root account on your webserver is compromised), I want to warn against a 'put a firewall in between and you're safe' view. That is too simplistic. Firewalls are often seen as a magic box that makes your network safe, but things like good intrusion detection (both on the network and on each host), good backup/recovery procedures, and common sense are just as important, if not more so.
There are tons of exploits in web/database apps, and common errors programmers make (not using placeholders with DBI, for instance) that use the webserver-to-database channel to get at the database. No firewall will help you there, as you state yourself.
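For reference, the placeholder point looks like this in DBI. This is a sketch using an in-memory SQLite database (assumes DBD::SQLite is installed; the `users` table and the hostile input are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Self-contained in-memory database so the example runs anywhere.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
$dbh->do(q{INSERT INTO users (name) VALUES ('alice')});

# Hostile input, as it might arrive from a web form.
my $name = q{alice' OR '1'='1};

# WRONG: interpolating the input lets it rewrite the SQL --
# the WHERE clause becomes  name = 'alice' OR '1'='1'  and matches everything.
# my ($id) = $dbh->selectrow_array(
#     "SELECT id FROM users WHERE name = '$name'");

# RIGHT: a placeholder passes the input as data only, never as SQL.
my ($id) = $dbh->selectrow_array(
    'SELECT id FROM users WHERE name = ?', undef, $name);
print defined $id ? "found $id\n" : "no match\n";   # prints "no match"
```

With the placeholder, the hostile string is compared literally against the `name` column and matches nothing; with interpolation it would have matched every row.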
I highly recommend reading Bruce Schneier's 'Secrets and Lies' for a good holistic view of security. The part on attack trees (building a tree of the most likely ways an attacker will come at you) is especially interesting.
mmm, we're deviating a lot from the OP question, I'll stop muttering.