I just finished reviewing mpeppler's slides from his Sybase Conference presentation, available here in PowerPoint format. Here are my thoughts:
HTML Template handling (2): The system I favor completely separates the HTML from the code.
Was it necessary to write one? Does it offer anything not found in HTML::Template or HTML::Seamstress?
Perl is a flexible language with a powerful modular structure and very strong text handling ability.
It certainly is, but this convenience and wealth of functionality in Perl has resulted in a decreased ability to leverage the shell, as well as a duplication of its functionality. Also, because so much is available within Perl, one tends to componentize complex apps as modules instead of as isolated shell scripts. The result is that large programs often have a huge memory overhead because modules cannot be unloaded after use. In contrast, Unix shell pipelines enter memory, do their job, then exit.
By forcing all access to the database through stored procedures you achieve: Data integrity
  1. By "data integrity" I think what you mean is that there is one place where any alteration of the database can be found, namely the stored procedure packages.
  2. While I agree with this, my personal preference is for a flexible sqlmaker such as DBIx::Recordset, because I would hate to have to recode all of my stored procedures whenever tables change for whatever reason. For details, see the "Table Mutation" and "Scalability" sections in my DBIx::Recordset article.
  3. Finally, stored procedures are optimal from a speed standpoint, but is it conceivable that the stored procedures could tax the RAM of the database server? I wrote SQL::Catalog with similar aims but then realized that it would suffer the same problem. The simplest thing is to name the queries and store each one in a file, e.g.:
    /ecommerce/billing/monthly.sql
    /ecommerce/billing/yearly.sql
    /ecommerce/hosting/dnslookup.sql
and then simply prepare a statement handle by loading the text file into a lexical variable, saving main memory:
    use File::Slurp qw(read_file);   # File->contents in the original; File::Slurp is one real module that slurps a file

    my $sql = read_file('/ecommerce/billing/yearly.sql');
    my $sth = $dbh->prepare($sql);
    $sth->execute(@bind_var);
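A small hypothetical helper along these lines (named_query and its caching are my own sketch, not SQL::Catalog's API) keeps one prepared handle per query file:

    my %sth_cache;

    # Prepare each named query at most once per process, then reuse it.
    sub named_query {
        my ($dbh, $file, @bind) = @_;
        $sth_cache{$file} ||= do {
            open my $fh, '<', $file or die "can't open $file: $!";
            local $/;                      # slurp the whole file
            $dbh->prepare(scalar <$fh>);
        };
        $sth_cache{$file}->execute(@bind);
        return $sth_cache{$file};
    }

    my $sth = named_query($dbh, '/ecommerce/billing/yearly.sql', @bind_var);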
Table-Driven Database Calls (5): The Perl subroutine has a configuration hash that defines all the inputs and outputs for the request. The configuration hash consists of several sections:
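(The slides don't reproduce the hash itself; purely as a hypothetical illustration, a table-driven entry of this kind might look like the following, where every key and value is my own invention:)

    my %request_conf = (
        proc   => 'billing_monthly_report',    # stored proc to exec
        input  => [ qw(account_id period) ],   # expected request parameters
        output => [ qw(line_item amount) ],    # columns handed to the template
    );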

I saw nothing here vis-à-vis password management. How would you handle different passwords for production and dev environments? Would your solution differ from DBIx::Connect? If so, why?

Carter's compass: I know I'm on the right track when by deleting something, I'm adding functionality

Replies are listed 'Best First'.
Re: Website Application Framework with Sybase, Apache & Perl: My Thoughts
by mpeppler (Vicar) on Dec 18, 2002 at 18:43 UTC
    Thanks for the comments!

    Was it necessary to write one? Does it offer anything not found in HTML::Template or HTML::Seamstress?
    No, of course not. However, this is based on code that I have been working with for several years, and that I am familiar with. We used this sort of thing at eCircles - actually, this is an evolution of the framework we had there.

    I believe I mention that other templating systems exist, and that they may in fact be better than the one I used here (and if I didn't mention it in the slides I know I talked about this during the presentation).

    To respond to your other points:

    • by data integrity I mean that any DML that needs to be run in a transaction (i.e. where more than one operation needs to be performed) is wrapped in a single stored procedure. This guarantees that the entire operation happens correctly, and also that a hung client doesn't hold up transaction flow on the server. (A client-side sketch follows this list.)
    • Personally, I'm not really keen on things like DBIx::Recordset. Where access to the database is concerned, I do not want things to mutate automatically when the database changes. I want the changes to be done in a controlled manner: the database schema shouldn't change that often (OK, so real life is sometimes not so clean...), and making the changes in the stored procs should be part of the changes done to the database.
    • As for using more memory on the server - I don't believe so. The code presented here is 100% specific to Sybase database servers. It is my contention that having a large number of prepared statements (i.e. statements with placeholders) will use at least as much memory as the stored procs (actually I have had discussions with a Sybase technical consultant who mentioned running out of procedure cache on the server when using a lot of prepared statements with placeholders).
      In addition, preparing multiple such statements in advance (with DBI and DBD::Sybase) will require each HTTP child to have one database connection open per prepared statement - not a good thing.
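    Here is the client-side sketch promised above. The proc name transfer_funds and its parameters are hypothetical, and it assumes a DBD::Sybase build that accepts placeholders in exec calls:

        # Hypothetical call: both UPDATEs live inside the proc, wrapped in
        # one server-side transaction, so a hung client cannot leave the
        # transfer half-done.
        my $sth = $dbh->prepare('exec transfer_funds ?, ?, ?');
        $sth->execute($from_account, $to_account, $amount);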
    How would you handle different passwords for production and dev environments?
    The passwords are stored in the conpool.cfg file and are used by the Sybase::Apache::ConPool module directly. My solution differs from DBIx::Connect because I use my connection pooling module to share a few database connections among all the Apache processes on the machine.
    Again I should emphasize that this is a Sybase specific solution, hence it does not use things like DBI at all.
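    A rough sketch of per-environment credential selection, for illustration only - the conpool.cfg layout below is entirely hypothetical, and it uses DBI/DBD::Sybase even though the real module does not:

        use DBI;

        # Hypothetical per-environment credentials; the real conpool.cfg
        # format is not shown in the slides.
        my %conf = (
            dev  => { server => 'SYBDEV',  user => 'www', pass => 'dev-secret'  },
            prod => { server => 'SYBPROD', user => 'www', pass => 'prod-secret' },
        );
        my $c   = $conf{ $ENV{APP_ENV} || 'dev' };
        my $dbh = DBI->connect("dbi:Sybase:server=$c->{server}",
                               $c->{user}, $c->{pass}, { RaiseError => 1 });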

    Michael

      I wrote:
      by data integrity I mean that any DML that needs to be run in a transaction (i.e. where more than one operation needs to be performed) is wrapped in a single stored procedure
      I should add that stored procs are also used to do access control. Access to the underlying tables is revoked from all but "privileged" users, forcing all access to be done through the stored procs. This allows the DBA/SQL developer to perform query optimization without affecting the client code, and it means that operations that need to be done in a single transaction will be done that way, because there is no way for normal users to access the tables directly.

      Michael

Re: Website Application Framework with Sybase, Apache & Perl: My Thoughts
by Aristotle (Chancellor) on Dec 18, 2002 at 20:47 UTC
    [..] large programs often have a huge memory overhead because modules cannot be unloaded after use. In contrast, Unix shell pipelines enter memory, do their job, then exit.

    I don't buy it. These issues are entirely orthogonal.

    If your memory use bothers you, you can either fork, require Foo::Bar, exit, or write a Perl script that uses the module in question and pipe through it - depending on which of the two is more convenient in the case at hand.

    You're making the same mistake as people who compare JSP with CGI scripts written in Perl and conclude that Java is better. What matters is the environment, not the language.

    Makeshifts last the longest.

      If your memory use bothers you, you can either fork, require Foo::Bar, exit, or write a Perl script that uses the module in question and pipe through it - depending on which of the two is more convenient in the case at hand.
      How does require result in memory savings over a used module? I don't think it does, as a use is converted into a BEGIN { require } at compile time.
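      For reference, perldoc -f use spells out the equivalence:

          use Foo::Bar qw(baz);

          # ...is exactly equivalent to:
          BEGIN { require Foo::Bar; Foo::Bar->import(qw(baz)); }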

      Carter's compass: I know I'm on the right track when by deleting something, I'm adding functionality

        Yes, require happens at runtime - that's why I said fork first. Only the child loads the module to do the deed and then exits, so the parent process doesn't acquire any bloat.
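        A minimal sketch of that fork-first pattern (Foo::Bar and do_the_deed are stand-ins for the real module and work):

            # Only the child ever loads the module; its memory goes back
            # to the OS when it exits, so the parent stays lean.
            defined(my $pid = fork()) or die "fork failed: $!";
            if ($pid == 0) {
                require Foo::Bar;            # loaded in the child only
                Foo::Bar::do_the_deed();
                exit 0;
            }
            waitpid($pid, 0);                # parent never loaded Foo::Bar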

        (Not that I'm entirely convinced that loading modules per se costs a lot of space - more likely it is accumulated data structures that cause the Perl process to grow exorbitantly. That's mainly a matter of scoping stuff closely, and otherwise being attentive.)

        Makeshifts last the longest.

        I think Aristotle's point is to load the module in a forked process so that it's dumped when that process exits.

Re: Website Application Framework with Sybase, Apache & Perl: My Thoughts
by djantzen (Priest) on Dec 18, 2002 at 19:15 UTC

    At the risk of intruding upon what looks like a personal communiqué to mpeppler...

    ...this convenience and wealth of functionality in Perl has resulted in a decreased ability to leverage the shell, as well as a duplication of its functionality. (emphasis mine)

    What functionality is no longer available such that one is no longer able to write code in this manner? Or do you mean that in the community of developers fewer people are skilled in shell scripting? What is the evidence for this, and why does it matter even if it is true? Why shouldn't one prefer to use Perl rather than an OS specific script?

    Also, because so much is available within Perl, one tends to componentize complex apps as modules instead of as isolated shell scripts.

    People componentize applications into modules because doing so significantly increases reusability and lets them benefit from Perl's portability.

    I would hate to have to recode all of my stored procedures whenever tables change

    Changing your tables is neither a minor nor an everyday activity. You must expect that changes at such a fundamental level will have cascading effects, and allocate proper time/resources to dealing with them - at least if you are attempting to utilize database optimizations like stored procedures (although IANASPE (stored procedures expert) :).

    is it conceivable that the stored procedures could tax the RAM of the database server?

    RAM is generally consumed by data structures, not procedures. I think you would require truly vast numbers of them to impact memory performance.

    In sum, if your database server is a Pentium 133 and you change your table schemas with unusual frequency in a software package guaranteed only to run on *nix, then I can see where you're coming from. Otherwise, I think you're overly concerned with bottlenecks and maintenance issues in the wrong places.

      Changing your tables is neither a minor nor an everyday activity. You must expect that changes at such a fundamental level will have cascading effects...
      Exactly. DDL schema changes aren't a trivial operation - their effect should be assessed across the entire system (i.e. from the database to the client code). By using stored procedures, some schema changes can be hidden from the client code.

      is it conceivable that the stored procedures could tax the RAM of the database server?
      RAM is generally consumed by data structures, not procedures. I think you would require truly vast numbers of them to impact memory performance.
      It should also be noted that the vast majority of Sybase sites run on pretty hefty hardware, and even those that run on linux or WinNT tend to have in excess of 1GB of RAM available to the database server (which doesn't mean that memory should be wasted, of course).

      Michael

        At a certain level of experience in application design, I see two different "influences":
        1. Application Programmer
        2. DBA

        The application programmer likes to keep control in the application, putting DML and DDL there, while the DBA likes to keep control in the database.

        I personally see the merits of both sides and, depending on the strength of the DBAs at the site I'm working at, would prefer the DBA-control method.

        I'm wondering which camp you're coming from... ;-)

      Why shouldn't one prefer to use Perl rather than an OS specific script?
      That's a good point. Just take the wisdom of Unix and put it in Perl so that people can use it on other OSes.

      Carter's compass: I know I'm on the right track when by deleting something, I'm adding functionality