in reply to Re: Re: Re: Re: Re: Re: "The First Rule of Distributed Objects is..."
in thread Multi tiered web applications in Perl
Finally, you seem to see the primary value of a distributed architecture as the ability to isolate individual sections of the app. You are talking about fairly large sections, like entire pages, so I think this is separate from the original question of whether or not the presentation layer and application layer should be on the same machine. I agree that there are uses for this, but I still think they only apply when you are willing to let a certain section of your application perform badly as long as another section performs well. I don't see how your statement that "some parts will require resources beyond that of others" applies to this. Of course they will, and at that point you can either improve your overall capacity to handle it, or isolate the part that is performing badly and let it continue to perform badly while the rest of the application is fast. I'll give an example of a use for this. ...

Well, think of it like this. In CS you can use a divide-and-conquer type of architecture, right? That's how merge sort and quicksort work, and how plenty of other things work, like matrix multiplication. If you can optimize the heavy parts, everything gets quicker; it's the same reason you use profilers. The point is, by keeping the heavy parts completely isolated from the quicker parts and paying attention to those heavy parts, things will always run fast. If those heavy parts get bogged down again, the quick parts stay quick. That is the big payoff of keeping everything separated out and loosely coupled within one complete architecture. When things are tightly knit, one part CAN slow down another, and you have to pay attention to the whole.
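Just to make the "find the heavy parts" bit concrete, here is a tiny sketch (mine, not from the thread; the two subs are obviously made-up stand-ins for a fast page and a slow page) using the core Benchmark module to see which parts are worth isolating onto their own cluster:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Placeholder "light" and "heavy" operations -- stand-ins for the
# fast pages and the slow, resource-hungry pages in a real app.
sub light_page { my $x = 0; $x += $_ for 1 .. 100;     return $x }
sub heavy_page { my $x = 0; $x += $_ for 1 .. 100_000; return $x }

# Run each sub for at least 2 CPU seconds and print a comparison
# table, so you know which parts deserve their own hardware.
cmpthese(-2, {
    light => \&light_page,
    heavy => \&heavy_page,
});
```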
Note that if you then go and add more machines to the slow package tracking cluster to fully handle the load, I would consider the isolation pointless. You could have simply left things all together and added those machines to the general cluster, with the exact same result.

Ah, but measuring need becomes difficult. Adding one machine may make things 5% faster overall, but if you need that one slow thing to become faster, you can improve its speed greatly. And sometimes slower performance doesn't matter much. Think of something like a listing of every message you've posted to PerlMonks; not a sophisticated report, just a simple listing. It's OK if it's a little slow, since it's a once-in-a-blue-moon operation. It may take a bit of time and resources, but that may be fine. And if you want, you can easily redirect things internally by saying which operation goes to which backend, without putting up new sysadmin-type infrastructure.
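Here's a rough sketch of the kind of internal redirection I mean; the operation names and hostnames are made up purely for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Map each logical operation to the backend cluster that should run it.
# Moving a slow operation (say, the "all my posts" listing) onto its own
# box is a one-line change here -- no new load-balancer or proxy config.
my %backend_for = (
    post_message => 'http://fast-cluster.internal:8080',
    show_thread  => 'http://fast-cluster.internal:8080',
    all_my_posts => 'http://slow-cluster.internal:8080',  # the blue-moon listing
);

sub backend_for {
    my ($operation) = @_;
    return $backend_for{$operation} || 'http://fast-cluster.internal:8080';
}

print backend_for('all_my_posts'), "\n";   # http://slow-cluster.internal:8080
```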
There are some other benefits to the mod_proxy approach (caching, serving static files), but for just isolating a particular set of URLs to specific groups of machines you would probably be better off doing it with a hardware load-balancer.

I totally agree with you, but putting some content on static pages isn't always an option, and load balancers solve part of the problem, not the whole problem.
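For reference, the mod_proxy style of isolation can be as small as a few ProxyPass lines on the front-end Apache. The hostnames and the /tracking path below are placeholders, riffing on the package-tracking example above:

```apache
# front-end httpd.conf with mod_proxy loaded; most specific mapping first.
# Send the heavy package-tracking pages to their own backend cluster
# and everything else to the general mod_perl cluster.
ProxyPass        /tracking/ http://tracking-cluster.internal:8080/tracking/
ProxyPassReverse /tracking/ http://tracking-cluster.internal:8080/tracking/
ProxyPass        /          http://general-cluster.internal:8080/
ProxyPassReverse /          http://general-cluster.internal:8080/
```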
But you know, it is true: adding ONE web server to a system that is at 101% capacity solves the problem. Splitting things up is great for large systems, large systems with large APIs. That's probably something you wouldn't do in mod_perl but in more business-oriented languages, like Java or even COBOL :)