Multi tiered web applications in Perl

by pernod (Chaplain)
on Oct 21, 2003 at 13:00 UTC ( [id://300894] )

pernod has asked for the wisdom of the Perl Monks concerning the following question:

Greetings,

One of the tasks here at $firm is developing a CGI application. It is a rather simple thing using CGI.pm, some database connections, and scary amounts of print statements. Organic growth of the application has shown that this approach is (unsurprisingly) not good enough.

We are researching different solutions, and one of them is CGI::Application, supported by HTML::Template and CGI::Session. Testing will be conducted with Test::More and screen-scraping with WWW::Mechanize. This is all well and good, but we are also inspecting other avenues.
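
For illustration, a Mechanize-driven test could be as small as the sketch below (the URL, form fields, and expected text are placeholders for our actual application):

    use strict;
    use warnings;
    use Test::More tests => 3;
    use WWW::Mechanize;

    # Placeholder URL and form fields -- substitute the real application's.
    my $mech = WWW::Mechanize->new;
    $mech->get('http://localhost/cgi-bin/app.cgi');
    ok( $mech->success, 'front page loads' );

    $mech->submit_form(
        form_number => 1,
        fields      => { query => 'widgets' },
    );
    ok( $mech->success, 'search form submits' );
    like( $mech->content, qr/Results/, 'results page rendered' );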

As $firm is quite enamoured with .NET, this is an obvious choice that offers advantages and disadvantages that will not be detailed in this forum. Another alternative would be J2EE, or even some CMS system or other.

From this rather long-winded introduction comes my question. In the name of buzzword-compliance, can Perl offer something akin to the multi-tiered architectures that J2EE and .NET market so heavily?

I want to say that separating business logic and database access into modules that implement CGI::Application, and presentation with HTML::Template is already (logically) a three-tiered architecture. Is it possible, or even interesting to divide these into physical tiers? Placing the presentation logic on one server and the CGI::Application stuff on another?

I could imagine some solutions, but they would be home-rolled and communicate on sockets and such. From a quality assurance viewpoint (another buzzword here at $firm), this is clearly not desirable. We simply do not have the resources to re-implement an enterprise server communication system in Perl, even though it could be quite an interesting challenge ...

Performing a last CPAN search before posting this dug up Oak::Application, which shows some promise. It is somewhat poorly documented though, so I'm not sure if this is a viable alternative. It seems to be a framework for multi-tiered applications; does anyone have any experience with this module?

Any enlightenment will be greatly appreciated. Thank you for reading this.

pernod
--
Mischief. Mayhem. Soap.

Replies are listed 'Best First'.
"The First Rule of Distributed Objects is..."
by perrin (Chancellor) on Oct 21, 2003 at 16:42 UTC
    "... don't distribute your objects!" -- Martin Fowler, author of the book "Refactoring."

    Can you use multiple physical tiers? Yes. You can use XML-RPC or SOAP to do it, although neither of those really offers object-oriented access (but neither does EJB, when used with popular approaches like "session facade"). You could probably get better performance by using straight HTTP and Storable for serialization, but then you'd have to write it yourself.
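
    To give a flavour of what "write it yourself" would mean, the client side of an HTTP-plus-Storable scheme might be no more than this sketch (the endpoint URL and the method name are invented for illustration):

        use strict;
        use warnings;
        use Storable qw(nfreeze thaw);
        use LWP::UserAgent;

        # Invented endpoint and method name, purely for illustration.
        my $ua  = LWP::UserAgent->new;
        my $res = $ua->post(
            'http://app-server.example.com/rpc',
            Content_Type => 'application/octet-stream',
            Content      => nfreeze( { method => 'get_customer', args => [42] } ),
        );
        die $res->status_line unless $res->is_success;

        my $reply = thaw( $res->content );    # plain Perl data again

    The server end would thaw the request, dispatch on the method name, and freeze the reply the same way.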

    More importantly, should you? As you already realized, there is no need to have multiple physical tiers just to get the abstraction benefits. Adding physical tiers will add tons of overhead in the form of RPC calls. Don't let anyone tell you that this overhead is insignificant. On a fast platform like mod_perl, this sort of RPC stuff is likely to be your biggest bottleneck. It could easily cut your performance in half, or worse, and would probably force you to change your nice fine-grained OO interfaces into ugly coarse-grained ones just to reduce the number of calls it takes to do something (J2EE people do this all the time with the aforementioned session facade stuff).

    So why would anyone do this? The standard response is "scalability." The theory here is that it will be helpful to be able to add more machines to your application server tier without adding any to your web servers, because your application servers need the extra power and your web servers don't. If you have both of these in a single physical tier, you can't separate them like that. When you think about it though, what exactly is the problem with your web servers getting more power than they need? Having these in the same process makes you use fewer resources (because of the missing RPC overhead), not more. I don't think you could reasonably argue that it takes more machines to do the same work because they are together. It will probably take fewer machines. You also get to use standard approaches for load-balancing, like hardware HTTP load-balancers.

    Some people will say "What about when you have really uneven business objects, and a few of them take 10 times as long to run as others? That will mess up your HTTP load-balancing." Well, most load-balancers can keep track of how many requests they have open to each web server and use that to send requests to the ones that have the lightest load, but it is possible that you could have a component that did some really heavy calculation and sucked up lots of CPU, making it hard to load-balance because it is so unlike the other requests. You can deal with this either by using asynchronous processing (a queue), or by dedicating some specific servers to handle these requests. This avoids penalizing every single request with RPC overhead. It's easy to route requests based on URL when you are using a front-end proxy or a good load-balancer.

    The real joke here is that the people who shout about this loudest are typically the EJB vendors, and they are selling snake oil. EJB performance is so bad when remote calls are involved that the EJB containers were all written to make the calls local whenever they can. That's right: the remote calls are actually turned into local calls into the same Java process whenever possible. In EJB 2, an explicit local interface was added so that developers can force this behavior on the few EJB containers that weren't already doing it!

    There is one thing that is good to separate, and that's your static web content. No need to tie up a big mod_perl process serving your JPEGs. There are several ways you can handle this, discussed here.
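
    One common arrangement, sketched here with invented paths and port: a slim front-end Apache serves the images itself and proxies everything dynamic through to the fat mod_perl backend.

        # front-end httpd.conf fragment (paths and port are examples)
        ProxyPass        /images/  !
        ProxyPass        /         http://localhost:8080/
        ProxyPassReverse /         http://localhost:8080/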

    You might also be interested in the article I wrote about how we used multiple logical tiers and in some cases physical tiers (for separate search servers) at eToys.com. That article is here.

      So why would anyone do this? The standard response is "scalability." The theory here is that it will be helpful to be able to add more machines to your application server tier without adding any to your web servers, because your application servers need the extra power and your web servers don't. If you have both of these in a single physical tier, you can't separate them like that. When you think about it though, what exactly is the problem with your web servers getting more power than they need? Having these in the same process makes you use fewer resources (because of the missing RPC overhead), not more. I don't think you could reasonably argue that it takes more machines to do the same work because they are together. It will probably take fewer machines. You also get to use standard approaches for load-balancing, like hardware HTTP load-balancers.

      I'll try to refute this. Imagine a site that keeps things simple and needs its fast parts to stay fast. Well, things that take more work need more resources. Yes, throwing more hardware, complete web servers, at the problem does solve something, but in the long run the entire site will slow down again as it gets more popular.

      So fine, now you create a separate farm for one part of your site. Tons of links to change over, etc., etc.

      It'd be a big pain in the ass. So why not keep the web interface, the actual display layer, running uber-fast, but move the bottlenecks to a separate area? This, of course, as we both know, is how n-tier would start to evolve. The advantage is that if I hit one interface that is dog slow, it may not mean the others will slow down. Fixing that might mean tweaking the application layer or the db layer.

      As for your RPC argument, you are right, but then again, the same problems happen in PHP or Perl. That's why you have connection pools and the like. You keep everything on high-speed connections, and things will run well. You and I both know that sending 1k of data over RJ45 is very quick. And we all know you don't do repetitive RPC calls to get all your data... just one fell swoop with one biz object.

      Also, doing things via proxy server would be fast, but as the scenario gets more complex with a URL proxy, things slow down in a way you can't speed back up. Load balancing a bunch of proxy servers is just ugly, 'cause then you have something that is just as bad as a regular web farm. As things slow down, the entire service slows down, and not just the parts that are costly.

        Imagine a site that keeps things simple and needs its fast parts to stay fast. Well, things that take more work need more resources. Yes, throwing more hardware, complete web servers, at the problem does solve something, but in the long run the entire site will slow down again as it gets more popular.

        I'm not sure what your point is here. Yes, scaling typically involves adding hardware as well as tuning your code. What does this have to do with physical tiers vs. logical tiers?

        So fine, now you create a separate farm for one part of your site. Tons of links to change over, etc., etc.

        First, you only separate things if they are so uneven in terms of the resources they require that normal load-balancing (treating all requests for dynamic pages equally) doesn't work. This is very rare. Second, you NEVER change links! There is no reason to change your outside URLs just because the internal ones changed. If you want to change where you route the requests internally, use your load-balancer or mod_rewrite/mod_proxy to do it.
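
        For example, a mod_rewrite rule like the sketch below (internal hostname invented) reroutes the expensive requests without touching a single outside URL:

            RewriteEngine On
            RewriteRule   ^/search/(.*)$  http://search-box.internal:8080/search/$1  [P,L]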

        So why not keep the web interface, the actual display layer, running uber-fast, but move the bottlenecks to a separate area?

        How does this help anything? The amount of work is still the same, except you have now added some extra RPC overhead.

        The advantage is that if I hit one interface that is dog slow, it may not mean the others will slow down. Fixing that might mean tweaking the application layer or the db layer.

        Can you explain what you're talking about here? Are you saying that some of your requests will not actually need to use the db layer or application layer?

        You keep everything on high-speed connections, and things will run well. You and I both know that sending 1k of data over RJ45 is very quick.

        Running well is a relative term. They will not run anywhere near as fast as they would without the RPC overhead. I'm not making this stuff up; this is coming from real applications in multiple languages that I have profiled over the years. Socket communication is fast, but it's much slower than most of the other things we do in a program that don't require inter-process communication.

        And we all know you don't do repetitive RPC calls to get all your data... just one fell swoop with one biz object.

        And that's the other problem: the RPC overhead forces you to replace a clean OO interface of fine-grained methods with "one fell swoop..." This is one of Fowler's biggest complaints about distributed designs.

        Also, doing things via proxy server would be fast, but as the scenario gets more complex with a URL proxy, things slow down in a way you can't speed back up.

        I'm not sure where you're getting this from. mod_rewrite (the most common choice for doing things based on URL) is very fast, and the hardware load-balancers are very very fast.

        Load balancing a bunch of proxy servers is just ugly, 'cause then you have something that is just as bad as a regular web farm. As things slow down, the entire service slows down, and not just the parts that are costly.

        What's so bad about a web farm? Every large site requires multiple web servers. And which parts are costly?

        I think you are imagining that it would take less hardware to achieve the same level of service if you could restrict what is running on each box so that some only run "business objects" while others run templating and form parsing code. I don't see any evidence to support this idea. If the load is distributed pretty evenly among the machines, I would expect the same amount of throughput, except that with the separate physical tiers you are adding additional load in the form of RPC overhead.

        Think of it like this: you have a request that takes 10 resource units to handle -- 2 for the display and 8 for the business logic. You have 2 servers that can each do 500 resource units per second for a total of 1000. If you don't split the display and business logic across multiple servers and you distribute the load evenly across them, you will be able to handle 100 requests per second on these servers. If you split things up so that one server handles display and the other handles business logic, you will max out the business logic one at 62 requests per second (496 units on that one box). So you buy another server to add to your business logic pool and now you can handle 125 requests per second, but your display logic box is only half utilized, and if you had left these all together and split them evenly across three boxes you could have been handling 150 at this point. And this doesn't even take the RPC overhead into account!
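
        The arithmetic is easy to check with a few lines of Perl:

            use strict;
            use warnings;

            my $units = 500;                      # capacity per box, units/second
            my ( $display, $logic ) = ( 2, 8 );   # cost per request, in units

            # Three mixed boxes, each handling whole 10-unit requests:
            printf "3 mixed boxes: %d req/s\n",
                3 * $units / ( $display + $logic );    # 150

            # Split tiers: two logic boxes bottleneck at 8 units per request,
            # while the lone display box sits half idle:
            printf "2+1 split:     %d req/s\n",
                2 * $units / $logic;                   # 125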

        Distributed objects sound really cool, and there is a place for remote cross-language protocols like SOAP and XML-RPC, but the scalability argument is a red herring kept alive by vendors.

      Wow, mod parent up, up, up. I was going to post my $.02 (perl pun) but you certainly said it better than I.

      In my $firm, we have a cluster of webservers, and that allows scalability (as you described above) rather than having another tier of hardware (other than the DB) for the business logic. This works very well for us, even under very high load.

      -------------------------------------
      Nothing is too wonderful to be true
      -- Michael Faraday

Re: Multi tiered web applications in Perl
by dragonchild (Archbishop) on Oct 21, 2003 at 14:27 UTC
    There are basically three reasons to go with a .NET or J2EE solution:
    1. You have expertise in those architectures that you do not have in the 4GL languages, like Perl or Python.
    2. You are already seriously committed to a Win32 architecture.
    3. You already have a large codebase in those architectures.

    For a completely new project, it often makes more sense to build it in Perl/PHP. Reasons?

    1. Development in Perl is faster.
    2. 90% of every Perl application has already been written, tested, and deployed.
    3. Perl is supported on (nearly) every single system known to man.
    4. Perl has support for (nearly) every single data store known to man.

    As for the multi-tiered stuff ... I worked on an e-commerce site that had a full MVC architecture. Oracle was on its own server, separate from the Apache servers. It was all in Perl.

    I've also worked on another web app that had 4 front-end servers, using MySQL for session information. They communicated using Tuxedo across the DMZ to an Oracle server (with HA backup). All the SQL was in Pro*C in a bunch of C++ classes.

    The point here is that Perl is a very effective language for every part of the web app. I've heard of apps that have their engine in Perl and use Java/.Net for the presentation layer. No problem!

    ------
    We are the carpenters and bricklayers of the Information Age.

    The idea is a little like C++ templates, except not quite so brain-meltingly complicated. -- TheDamian, Exegesis 6

    ... strings and arrays will suffice. As they are easily available as native data types in any sane language, ... - blokhead, speaking on evolutionary algorithms

    Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.

      4GL languages, like Perl or Python

      Without commenting on the rest of your post, how do you classify Perl and Python as 4GLs? They're not application specific, they have conditional and looping operators, and they don't require a database.

        4GL => 4th generation languages. I've often heard Perl described as a fourth generation language. Of course, I should've looked up the definition first.

        Often abbreviated 4GL, fourth-generation languages are programming languages closer to human languages than typical high-level programming languages. Most 4GLs are used to access databases. For example, a typical 4GL command is FIND ALL RECORDS WHERE NAME IS "SMITH"

        But, the definition can apply to Perl, in that the language is much closer to English than C or Java. For example,

        foreach my $item (@list_of_stuff) {
            next if $item eq "Something bad";
            do_stuff($item);
        }

        And, from what I've read about Perl6, this will be even more the case. My feeling is that any language that provides the following is a candidate for being a fourth-generation language.

        • Abstracts away physical hardware (where possible), like memory management
        • Provides higher-order constructs as first-order variables (like hashes)
        • Allows for flows that closely map to human thought processes

        But, as always, this is just my opinion.

        ------
        We are the carpenters and bricklayers of the Information Age.

        The idea is a little like C++ templates, except not quite so brain-meltingly complicated. -- TheDamian, Exegesis 6

        ... strings and arrays will suffice. As they are easily available as native data types in any sane language, ... - blokhead, speaking on evolutionary algorithms

        Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.

      90% of every Perl application has already been written, tested, and deployed.
      That sounds really appealing, but I certainly haven't found it to be the case. Possibly it holds if you are writing things such as CGI scripts that do mail forms, but I'm curious. Do other folks who write serious Perl applications for their day jobs agree with this statement?

      -- dug
        In the application I'm working on now, there is much more CPAN code (Class::DBI, HTML::Template, CGI::Application, DBI, Data::FormValidator, SOAP::Lite, XML::Parser, etc.) than code written by me. The code written by me is the actual logic of the application, i.e. the part that couldn't possibly come from another source since it is specific to this project.
        I do write serious Perl applications for my day job. A few examples of apps I've heard about:
        • An application that will take inputs in nearly every format known to man (including screen scrapes of vt100 terminals), munge the input according to arbitrarily-defined rules, then output it in any number of formats, defined at run-time.
        • An application that securely handles over $100M in seed orders yearly.
        • eToys, the most-hit website behind eBay and Amazon.

        Would those qualify as serious web apps?

        ------
        We are the carpenters and bricklayers of the Information Age.

        The idea is a little like C++ templates, except not quite so brain-meltingly complicated. -- TheDamian, Exegesis 6

        ... strings and arrays will suffice. As they are easily available as native data types in any sane language, ... - blokhead, speaking on evolutionary algorithms

        Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.

        The code I produce in Perl could easily be 10% of what I would produce doing everything in C without access to a huge library like CPAN.
Re: Multi tiered web applications in Perl
by jdtoronto (Prior) on Oct 21, 2003 at 13:47 UTC
    I am sure that at some point you will hear from perrin, who has a pretty serious background at the highest levels of web application development. After discussions here on exactly the same theme, I have recently settled on the model you first propose.

    Separating the application and presentation may not be as beneficial as you might think, but the more I get into coding my application, the more impressed I am by what even a quite modest Apache server with mod_perl, using CGI::Application and HTML::Template, can do. You might, however, gain something by implementing an additional layer to separate business logic and application logic. I have seen this done in one system by having the business logic implemented behind SOAP services, with the application logic requesting the data from another machine. Of course, this can be implemented on a single machine for development and testing.

    This write-up is an excellent example of just what can be achieved. There has been a lot of discussion in the monastery over the last few months; you might want to try a supersearch on various keywords like "web application" and see what you can find.

    Good luck!

    jdtoronto

Re: Multi tiered web applications in Perl
by adrianh (Chancellor) on Oct 21, 2003 at 13:19 UTC
    I want to say that separating business logic and database access into modules that implement CGI::Application, and presentation with HTML::Template is already (logically) a three-tiered architecture.

    This is true, and worth emphasising. A tiered architecture can be implemented in any language.

    Is it possible, or even interesting to divide these into physical tiers? Placing the presentation logic on one server and the CGI::Application stuff on another?

    It's certainly possible. However, in many instances it's more trouble than it's worth, since you add a lot of overhead in communicating between the different physical layers.

    There's a recent onjava.com article that discusses this (in the context of PHP but it applies equally to Perl).

    What Perl doesn't have is a single commonly accepted way of handling separation of physical tiers. But, with TMTOWTDI, why would you expect there to be only one way? :-)

    For example, it's trivial to create stand-alone application servers with POE, RPC::XML, or SOAP::Lite. The amount of work involved is tiny. You certainly don't have to drop down to the socket-coding level.
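
    For instance, a stand-alone RPC::XML server really is only a few lines (a sketch: the port and method name are invented):

        use strict;
        use warnings;
        use RPC::XML::Server;

        my $srv = RPC::XML::Server->new( port => 9000 );

        $srv->add_method( {
            name      => 'demo.echo',
            signature => [ 'string string' ],    # returns a string, takes a string
            code      => sub {
                my ( $server, $arg ) = @_;
                return "echo: $arg";
            },
        } );

        $srv->server_loop;    # blocks here, serving requests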

      I think mod_perl would be a better base for an application server (using XML-RPC, SOAP, or whatever) than POE. It will typically have better performance.
        It will typically have better performance

        True, mod_perl is pretty hard to beat if it meets your needs. I was just illustrating that there was, as they say, more than one way of doing it :-)

        That said, I've been surprised at how well POE performs.

        A few months back I needed a server to talk to some legacy applications as part of a web app. mod_perl wasn't really suitable (the legacy app wasn't HTTP based, had a stateful protocol, stupidly long connection times, stupidly small number of allowed connections) so I threw it together with POE.

        I was fully expecting to have to rewrite in C once I had the prototype up and running. However, straight POE was more than fast enough. One more reason to like Perl :-)

Re: Multi tiered web applications in Perl
by ctilmes (Vicar) on Oct 21, 2003 at 13:14 UTC
    As long as you are researching different solutions, one you might take a look at is HTML::Mason

    Implemented properly, you can make it fit the sort of model you are aiming for.

      I did take a look at HTML::Mason, and was a bit put off by the intermixing of HTML and Perl. One of our goals is to separate logic and presentation. After a (somewhat superficial) perusal of the documentation, it looks like Mason couples these rather tightly.

      I may be very wrong though. Thank you for your suggestion, I will look further into this :o)

      pernod
      --
      Mischief. Mayhem. Soap.

        You don't have to intermix HTML and Perl -- you just can when you need/want to.

        You can also separate your application into some Mason documents that present data and others (or better yet, non-Mason Perl modules) that hold the business logic and interact with your database (see the sketch below).

        Mason provides syntactic sugar to control the interface between the two, and by allowing Perl in your presentation layer, you have a great deal of power/flexibility in generating HTML.

        Mason also allows nice modularity of various presentation elements and combining them in various ways to produce user interfaces.
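
        To make the split concrete (module, table, and component names are invented, and $dbh is assumed to be set up elsewhere, e.g. in your Apache configuration), the logic can live in an ordinary module:

            # MyApp/Orders.pm -- plain Perl, no Mason in sight
            package MyApp::Orders;
            use strict;
            use warnings;

            sub open_orders {
                my ( $class, $dbh ) = @_;
                return $dbh->selectall_arrayref(
                    'SELECT id, total FROM orders WHERE status = ?',
                    { Slice => {} },
                    'open',
                );
            }
            1;

        while the Mason component only presents what the module hands it:

            <%init>
            my $orders = MyApp::Orders->open_orders($dbh);
            </%init>
            <ul>
            % for my $o (@$orders) {
              <li>Order <% $o->{id} %>: <% $o->{total} %></li>
            % }
            </ul>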

        One of our goals is to separate logic and presentation
        You probably don't want J2EE and that horrible JSP stuff then... ;-)
      Implemented properly, you can make it fit the sort of model you are aiming for.

      I didn't think Mason offered any direct support for separating business objects into a separate physical layer. I thought it all lived in the Apache process.

      Am I missing something?

        I didn't think Mason offered any direct support for separating business objects into a separate physical layer. I thought it all lived in the Apache process.

        Am I missing something?

        Yes. HTML::Mason is independent of Apache. Sure, it has great support for running on Apache under mod_perl or CGI, but it doesn't require Apache or any web server.
Re: Multi tiered web applications in Perl
by TVSET (Chaplain) on Oct 21, 2003 at 14:24 UTC
    Last time we had a sync with our buzzword experts in the office, usage of XML::RPC qualified the application for the "multi-tier" term. Check that out. :)
Re: Multi tiered web applications in Perl
by EvdB (Deacon) on Oct 21, 2003 at 16:02 UTC
    You say that:
    It is a rather simple thing using CGI.pm, some database connections, and scary amounts of print statements.

    If you have a simple thing to start with then keep it that way. Only go for the complications if you really need them. It is an excellent idea to pull out all those print statements and stick them in a template, and CGI::Application is a great starting point.

    It may not be what you had in mind, but I tend to add another 'layer' between CGI::Application and the actual module that generates the page, usually called something like MyApp::Base. In this I then add all those functions that crop up, like connecting to the database, getting the user, or redirecting to a login page (using cgiapp_init or cgiapp_prerun), etc.
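
    A skeleton of such a base class might look like this (the DSN and the session_user helper are invented for the example):

        package MyApp::Base;
        use strict;
        use warnings;
        use base 'CGI::Application';
        use DBI;

        # Shared setup for every page module in the application.
        sub cgiapp_init {
            my $self = shift;
            $self->{dbh} = DBI->connect(
                'dbi:Pg:dbname=myapp', '', '', { RaiseError => 1 }
            );
        }

        # Bounce anyone who isn't logged in to the 'login' run mode.
        sub cgiapp_prerun {
            my $self = shift;
            $self->prerun_mode('login')
                unless $self->session_user;    # hypothetical helper
        }

        1;

        # A page module then inherits all of the above:
        package MyApp::Reports;
        use base 'MyApp::Base';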

    This can generate some really compact and easy-to-test code, which can also be run on everything from a cut-down CGI-only server through to a fully mod_perl-optimised one. This last point is a real clincher: it is great having a super-duper speedy custom setup, until the server dies and you need it running now on that server there, yes, the one we used as a doorstop.

    --tidiness is the memory loss of environmental mnemonics

Re: Multi tiered web applications in Perl
by Anonymous Monk on Oct 22, 2003 at 09:31 UTC
    Ok, let me start by being brutally honest. The vast majority of development efforts cannot separate their logic and presentation. There are a number of reasons for this:

    1. Developers cannot think ahead well enough to create the right abstractions. This may be because of the developers, or because the spec is changing too fast.
    2. The presentation and the logic are bound in some manner. You can tell you have this problem when you're trying to implement a programming language in your presentation layer.
    3. The logic is nearly non-existent. You can tell this is happening when your web site looks remarkably like your data model and you're doing things like taking the results of queries and dumping them straight to XML followed by a simple translation or something like that.

    If you are in the rare group that is not covered by one of these situations, THEN you need to start worrying about separating logic and presentation.

    If you recognise one of the above, GIVE UP NOW. You will save yourself a LOT of pain no matter what platform you're developing under. There are other processes to manage complexity and performance which are much more appropriate; find them and use them. Separation of display and logic is a tool, not a holy law.

    Assuming that you actually need to separate logic and presentation, sit down with a bit of paper and think about why. Let me give you a hint: if you're honest about this, and reason #1 comes down to "because J2EE and .NET can do it and management thinks it's good", FIND a CLUESTICK and BEAT your MANAGERS until they understand that architecture is the job of an architect, not the job of the architect's manager.

    If you have used a different reason for #1, it is probably some mush containing the words "performance" and "scalability". I hear tell, in certain circles, that it makes much more sense to scale by just buying another box, and that programming for performance is a waste of valuable programmer time and effort that could be better spent getting the application out the door.

    Let me point out right now that there are almost no web applications on the planet that cannot simply work once another box is supplied and session affinity is turned on at the accelerator. There are a very few conditions you need to meet to make this work:

    1. A web accelerator that supports session affinity. They're not rare.
    2. A web application that uses some kind of network-connectable database backend
    3. A web application that doesn't do anything on the local filesystem or tricky stuff with shared memory.
    4. An SQL database that can replicate

    2 & 3 describe MOST web applications in the target area (that we haven't already eliminated above). If you're designing from scratch, these conditions are a piece of cake to code within. That's it, people. No J2EE, no serialized objects wandering around on the network, no XML, and no remote procedure calls.

    So, using the circles' reasoning above, why would you waste your programmers' precious time on an inferior platform that supports a useless feature? You don't need it, it doesn't get you anything, and you can write the code in half the time in Perl.

    Are you still reading? Do you have another argument in your head? Then let me explain the last level of web application. This is the level where you have specialised acceleration requirements. Perhaps you have a low-CPU-cost logic section which needs to be tightly coupled, and a loosely coupled, high-CPU-cost presentation end. For example, imagine that the logic section involved calculating collisions between 4000 box models, and the presentation section rendered them. The 4000 boxes, each moved by an individual client, benefit from tight coupling, being on one machine. The rendering, however, is per client and CPU-expensive, so we want to farm it out.

    Yay! We have a good solid argument for the separation of logic and presentation using a remote invocation system.

    Anyone doing the above? Raise your hands... hrm, nobody? Listen, I've been doing these things for years, and only ONCE did we build a logic system. You know what it was built in? C, with shared memory, and synchronisation via broadcast UDP. Because it was hellishly performance-intensive. No RPC crap would have saved us there; we had a real problem, not a make-believe one.

    Laziness, people, one of the magic rules: don't make things more complicated than they need to be. Most of the time you're just making work for yourself.

      While I agree with much of this, I think there is a lot of value in logical (not physical) separation of the presentation code (templates, XSL, some PDF-writing module, etc.) and the domain objects (where the actual logic of the application goes). Keeping them separate doesn't help performance or scalability, but it does help with maintenance, making it much easier to change things later on. Tangling up the SQL with the HTML generation and the application logic makes it really hard to make general changes, like adding caching to all your database calls.
        $dbh->prepare_cached() ???
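
        Worth noting: prepare_cached caches the prepared statement handle, not the query results, so it saves the re-parse but still hits the database each time. A sketch:

            use strict;
            use warnings;
            use DBI;

            # Any DBD will do; SQLite is just convenient for a demo.
            my $dbh = DBI->connect( 'dbi:SQLite:dbname=demo.db', '', '',
                                    { RaiseError => 1 } );

            # The same SQL later in the process gets the same handle back,
            # skipping the prepare step (the results are NOT cached).
            my $sth = $dbh->prepare_cached('SELECT name FROM users WHERE id = ?');
            $sth->execute(42);
            my ($name) = $sth->fetchrow_array;
            $sth->finish;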
Re: Multi tiered web applications in Perl
by CountZero (Bishop) on Oct 21, 2003 at 19:08 UTC
    IMHO, the only clean way of fully separating content/business logic and presentation is by having your Perl scripts provide XML data (this is the content/business logic part) and then transforming this XML into HTML (this is the presentation part) using XSLT and CSS. This can be done server-side (think of AxKit) or client-side (e.g. IE 6). I find it much better than a templating system, which always feels either a bit restricted or too complicated, but never entirely right.
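
    Server-side, the whole pipeline is short with the libxml toolchain (file names invented for illustration):

        use strict;
        use warnings;
        use XML::LibXML;
        use XML::LibXSLT;

        # The Perl side produces or loads XML (the content tier) ...
        my $source = XML::LibXML->new->parse_file('orders.xml');

        # ... and an XSLT stylesheet turns it into HTML (the presentation tier).
        my $style      = XML::LibXML->new->parse_file('orders.xsl');
        my $stylesheet = XML::LibXSLT->new->parse_stylesheet($style);

        print $stylesheet->output_string( $stylesheet->transform($source) );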

    And of course XML, XSLT, ... are hip buzzwords and mucho liked by PHBs.

    CountZero

    "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law

      IMHO, the only clean way of fully separating content/business logic and presentation is by having your Perl scripts provide XML data

      Much as I love XML in some circumstances, this is just nonsense :-)

      XML is no better (or worse) than any other transport mechanism. You can misuse XML just the same way you can misuse any other transport mechanism.

        Well, mankind has a great talent for misusing anything.

        CountZero

        "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law

Maximum Respect and Gratitude to the Illustrious Monastery
by pernod (Chaplain) on Oct 22, 2003 at 10:34 UTC

    I would like to thank everyone who has participated in this discussion. Your input is greatly valued, and it has given me many interesting pointers to different solutions.

    I would especially like to thank perrin and sporty for their educational dialogue, and the Anonymous Monk for his detailed, no-nonsense reply.

    Humbly,

    pernod
    --
    Mischief. Mayhem. Soap.

p5ee?
by smackdab (Pilgrim) on Oct 22, 2003 at 02:57 UTC
    Have you seen: http://p5ee.perl.org/

    Don't know anything about it other than it *sounds* like j2ee ;-)
Re: Multi tiered web applications in Perl
by abarilla (Initiate) on Oct 22, 2003 at 14:38 UTC
    Relative to Perl, I'm an amateur, so I don't have anything substantial to add, but how about a new marketing campaign: .PerlEE - Dot Perl Enterprise Edition, now with improved TPS reporting.

      P5EE, while not a marketing campaign per se, has similar goals to the ones you outline.
