in reply to Abstracting sql

It's already been discussed several times around here. Personally I find that building a module that fetches data from table(s), as you propose with get_data([qw/foo bar baz/]...), works for simple situations, but it breaks down rapidly once you have a complex database where multiple tables need to be accessed for any particular request.

Therefore I much prefer creating logical requests that map to Perl subroutines (e.g. addNewUser(), getUserInformation(), etc.) that either call SQL stored procedures (my preference) or, if these aren't available, have the SQL in the subroutine.
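
For example, a minimal sketch of one such subroutine, assuming DBI and a hypothetical get_user_information stored procedure (the call syntax varies between databases):

    sub getUserInformation {
        my ($dbh, $user_id) = @_;
        # Call the stored procedure where available; otherwise the
        # plain SQL would live here instead.
        my $sth = $dbh->prepare('CALL get_user_information(?)');
        $sth->execute($user_id);
        return $sth->fetchrow_hashref;
    }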

This also has the advantage of forcing you to structure your app a little, so that you only need to create a limited number of database access methods/subroutines. That greatly benefits the maintainability of the database, and helps in figuring out what indexes are needed. For example, one of my clients currently has a largeish database (about 20GB) with very little structure - trying to figure out what indexes are needed, and what queries are actually run, is a nightmare, requiring "auditing" (i.e. recording *all* SQL that is sent to the server) and sifting through gigabytes of SQL code...

Michael

Re: Re: Abstracting sql
by barbie (Deacon) on Jul 31, 2003 at 09:02 UTC
    I tend to use a combination of packages/subroutines implementing the business logic, a database wrapper, and an SQL phrasebook.

    By using Class::Phrasebook, I have successfully transferred from a CSV mockup, through a MySQL implementation, to MS SQL Server 7 (and stored procedures), by only changing the phrasebook. The functionality and wrapper haven't changed, just the database. For me, having the SQL wrapped up in the code invites the same argument as wrapping HTML up in Perl or vice versa: it can get ugly and awkward to maintain.
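
    A hand-rolled illustration of the phrasebook idea (Class::Phrasebook itself keeps its phrases in an external file; the dialect names and SQL below are made up):

        my %phrasebook = (
            mysql => {
                get_user => 'SELECT id, name FROM users WHERE id = ?',
            },
            mssql => {
                get_user => 'EXEC get_user ?',   # stored procedure instead
            },
        );
        # Switching databases means switching dialects, not code:
        my $sql = $phrasebook{$dialect}{get_user};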

    Using routines that implement the business logic is always a good thing. However, the DB wrapper should be the one to handle all the connections, extract the SQL from the phrasebook, structure the data and handle any errors, so that if anything goes wrong there's only one point of failure, not potentially several hundred. My DB wrapper also handles the auditing required, and obtaining the ID for inserted records. Repeating all this in each module containing business logic seems unnecessary.
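
    A rough sketch of what such a wrapper might look like (names hypothetical; assumes DBI and a phrasebook hash like the one above):

        package My::DB;
        use strict;
        use warnings;
        use DBI;

        sub new {
            my ($class, %args) = @_;
            # One place to connect, so there is one point of failure.
            my $dbh = DBI->connect($args{dsn}, $args{user}, $args{pass},
                                   { RaiseError => 1 });
            return bless { dbh => $dbh, book => $args{phrasebook} }, $class;
        }

        # Look up the SQL by logical name, run it, structure the result.
        sub select_all {
            my ($self, $phrase, @bind) = @_;
            my $sql = $self->{book}{$phrase}
                or die "no such phrase: $phrase";
            return $self->{dbh}->selectall_arrayref($sql, { Slice => {} }, @bind);
        }

        1;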

    I've heard good things about Class::DBI; however, it does seem to be aimed more at SQL-based queries than at stored procedures.

    --
    Barbie | Birmingham Perl Mongers | http://birmingham.pm.org/

      You might want to compare Class::Phrasebook to SQL::Catalog. I would be interested in hearing your feedback.

      Carter's compass: I know I'm on the right track when, by deleting something, I'm adding functionality.

Re: Re: Abstracting sql
by BUU (Prior) on Jul 31, 2003 at 13:46 UTC
    Well, I'm all for being logical in my requests and highly structured code. Unfortunately I can't use stored procedures, since maybe only fifty percent of the databases out there implement them. As to your other solution, I really like the idea of using data functions to map from Perl to SQL. Unfortunately it looks like using your examples would give me the worst of both worlds: I would still end up with SQL strewn around the middle of my code, and I see no easy way to be "database agnostic" with those subs, other than writing one set of subs for every single database I want to support.

    I suppose in the end I'm probably going to have to do something similar, with different modules containing functions for each database. But the idea of having large numbers of subroutines that are essentially identical bothers me: the Postgres subs and the MySQL subs would use almost exactly the same SQL, yet I couldn't just share the subroutines because of the one sub where I have to use different SQL for whatever reason.

    I suppose that writing individual classes for each database and combining it with some sort of factory method would be the simplest way to handle it, perhaps something like:
        my $db_type = defined $use_mysql ? 'mysql' : 'postgres';  # really simplified
        my $user = User->new($db_type);
        $user->name;
        $user->date;
        $user->stuff;
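
    One way such a factory might look (class names hypothetical, and the per-database classes are assumed to be loaded already):

        package User::Factory;
        use strict;
        use warnings;

        # Map the database type to the class that implements it.
        my %class_for = (
            mysql    => 'User::MySQL',
            postgres => 'User::Postgres',
        );

        sub create {
            my ($factory, $db_type, @args) = @_;
            my $class = $class_for{$db_type}
                or die "unsupported database type: $db_type";
            return $class->new(@args);
        }

        1;

        # e.g. my $user = User::Factory->create($db_type);
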
      I really like the idea of using data functions to map from Perl to SQL. Unfortunately it looks like using your examples would give me the worst of both worlds: I would still end up with SQL strewn around the middle of my code, and I see no easy way to be "database agnostic" with those subs, other than writing one set of subs for every single database I want to support.
      Using these data functions you at least limit the SQL to a specific module (or set of modules), and this set of modules could quite easily be a hierarchy that takes database-specific differences into account.

      Another possibility is to store the SQL in a separate file (text file, DB_File or something else) indexed by logical request. Load the file at startup and map perl subroutines to SQL statements.
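
      For instance, a minimal loader for the text-file variant (the one-statement-per-line, tab-separated format here is made up):

          sub load_phrasebook {
              my ($path) = @_;
              open my $fh, '<', $path or die "can't open $path: $!";
              my %sql;
              while (my $line = <$fh>) {
                  chomp $line;
                  next if $line =~ /^\s*(?:#|$)/;   # skip comments and blanks
                  my ($name, $statement) = split /\t/, $line, 2;
                  $sql{$name} = $statement;
              }
              return \%sql;
          }

          # Load once at startup, then look statements up by logical name:
          my $sql_for = load_phrasebook('queries.tsv');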

      There are always a bunch of ways to do these things... :-)

      Michael

        Heh, actually on further thought, I decided that a classic OOP approach would be the best solution. I could define a basic/default "data access class" with all the fun little methods like get_user_data() or whatever else I decide I need; then every database-specific implementation can simply inherit from the default class, and I can easily override any method that needs custom SQL, or even override the whole class (say, if I wanted to use flat files or something...).
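
        A sketch of that inheritance approach (all names hypothetical; assumes DBI):

            package DataAccess;                # the basic/default class
            use strict;
            use warnings;

            sub new {
                my ($class, $dbh) = @_;
                return bless { dbh => $dbh }, $class;
            }

            # Generic SQL that most databases will accept.
            sub get_user_data {
                my ($self, $id) = @_;
                return $self->{dbh}->selectrow_hashref(
                    'SELECT id, name FROM users WHERE id = ?', undef, $id);
            }

            package DataAccess::MSSQL;         # override only the custom bits
            our @ISA = ('DataAccess');

            sub get_user_data {
                my ($self, $id) = @_;
                return $self->{dbh}->selectrow_hashref(
                    'EXEC get_user_data ?', undef, $id);
            }

            1;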