I ask this because I happen to know that perrin has direct experience with that level of volume, and years of experience with a number of high-volume sites (admittedly most not peaking at over a million page views per hour) at companies with a variety of technology mixes. I also know for a fact that your arguments about application servers are standard advertising copy from their vendors, and that copy doesn't necessarily match experience on the ground. This flavours my reaction to what perrin has to say.
Since I have raised the question of qualifications, let me be honest about my own. I don't have a lot of high-volume website experience. What I mostly have is enough math and theory to do back-of-the-envelope calculations on scalability and latency. And it is obvious to me that adding extra stages has to increase latency, CPU usage, and internal network traffic, all of which at high volume eventually show up in hardware costs and the user experience. (Enough hardware requires more employees as well...) Plus users often judge you more on latency than on throughput. Throughput you can buy hardware to cover, but latency is not something you can ever get back once you lose it. (That is a lesson I learned early, and one that is not appreciated nearly as much as I wish it were.)
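To make that kind of back-of-the-envelope calculation concrete, here is a rough sketch in Perl. Every number in it (per-hop latency, payload size, number of extra tiers) is an assumption I picked purely for illustration, not a measurement from any real site; the point is only the shape of the arithmetic.

    #!/usr/bin/perl
    # Back-of-the-envelope sketch: cost of extra tiers at high volume.
    # All numbers below are illustrative assumptions, not measurements.
    use strict;
    use warnings;

    my $page_views_per_hour = 1_000_000;                    # peak load from the discussion above
    my $requests_per_sec    = $page_views_per_hour / 3600;  # ~278 req/s

    # Assume each extra tier adds one serialize/deserialize round trip on the
    # internal network plus some marshalling CPU.
    my $hop_latency_ms = 2;    # assumed per-hop network + marshalling latency
    my $hop_payload_kb = 20;   # assumed average payload shuttled per request
    my $extra_tiers    = 2;    # e.g. web server -> app server -> remote object layer

    my $added_latency_ms = $extra_tiers * $hop_latency_ms;
    my $added_traffic_mb = $requests_per_sec * $extra_tiers * $hop_payload_kb / 1024;

    printf "Added latency per request: %d ms\n",            $added_latency_ms;
    printf "Added internal traffic:    %.1f MB/s at peak\n", $added_traffic_mb;

    # Throughput can be bought back with more boxes; the per-request latency
    # above is paid by every user on every page view, and no hardware refunds it.

Plug in your own numbers and the conclusion is the same: the throughput cost scales with hardware you can buy, but the latency cost is added to every single response.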
In reply to Re^4: "The First Rule of Distributed Objects is..." by tilly
in thread Multi tiered web applications in Perl by pernod