sozinsky has asked for the wisdom of the Perl Monks concerning the following question:

Hi! I'm working on a little project to represent database tables as simple classes. For example, if you had a table with the following definition (in SQL):
create table my_database..foo ( bar int , baz varchar(10) )
Then you would have a simple class my_database::foo with members bar and baz. I'd like to solve this by having a Table package which handles the creation of the my_database::foo package for you. That is, as a user you'd simply write:
use Table ( { -dbase => "my_database", -table => "foo" } );
my $f = my_database::foo->new( { bar => 1, baz => "neat" } );
The Table class's import() method will query the local database my_database for the columns of the foo table, and then (I use Class::MethodMaker) construct the class definition of my_database::foo (at _compile_ time!). So, specifically,
# Table's import method
sub import {
    my ( $class, @args ) = @_;
    my $href = $args[0];
    # sanity checking on $href...
    my ( $dbase, $table ) = ( $href->{-dbase}, $href->{-table} );

    # get_columns returns a reference to an array
    # containing the columns
    my $columns = get_columns( $dbase, $table );

    # this is where I get stuck -- a package declaration
    # can't take variables, so this isn't legal Perl:
    package $dbase::$table;
    @ISA = qw(Table);
    use Class::MethodMaker
        get_set       => $columns,
        new_hash_init => 'init';

    # broquaint suggested this, but he didn't like it
    # "this is just so very very wrong" --broquaint
    # the idea is to export the $dbase::$table package
    # definition to the caller
    *{"main::${dbase}::${table}::"} = *{"${dbase}::${table}::"};
}
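(For reference, here is a rough sketch of one shape import() could take, sidestepping the literal package declaration by building the class inside a string eval. It is only a sketch: get_columns() is the same hypothetical helper as above, the Class::MethodMaker arguments are the ones already used there, and the constructor is named new to match the usage example. Note also that packages live in the global symbol table, so nothing needs to be copied into the caller afterwards.)

# a sketch only
sub import {
    my ( $class, $href ) = @_;
    my ( $dbase, $table ) = ( $href->{-dbase}, $href->{-table} );
    my $pkg     = "${dbase}::${table}";
    my $columns = get_columns( $dbase, $table );

    # build the class in a string eval so the package name and the
    # column list can be interpolated at run time
    my $cols = join ' ', @$columns;
    eval qq{
        package $pkg;
        our \@ISA = ('Table');
        use Class::MethodMaker
            get_set       => [ qw($cols) ],
            new_hash_init => 'new';
        1;
    } or die "could not build $pkg: $@";
}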
Perhaps another way to solve this would be to subclass the Table package, passing it the database and table name. Perhaps this is better; also (and this is another question), would I ever be allowed to do something like this?
use Table ( { -dbase => "mydb", -table => "foo" } );
use Table ( { -dbase => "someotherdb", -table => "bar" } );
# results in the exporting of 2 class definitions:
# mydb::foo and someotherdb::bar
as opposed to
package mydb::foo;
@ISA = qw(Table);

package someotherdb::bar;
@ISA = qw(Table);
Anyhow, hope this isn't too rambling, thanks all in advance.

Replies are listed 'Best First'.
Re: constructing dynamic class definitions at compile time (from schema definition)
by dragonchild (Archbishop) on Jul 25, 2002 at 14:32 UTC
    broquaint's answer is right, but I want to address a problem with your question.

    You're creating classes to represent tables, in a 1-1 relationship. I'm trying to figure out why.

    Wouldn't it be better to create classes that represent logical concepts in your business logic? Then, go ahead and create tables that represent the fully normalized way to store any data needed for those classes to work. Within the class, you map between the logical and structural.

    My gripe is that this seems to be a knee-jerk reaction, especially in systems where the applications developer is also the DBA (or, at least, plays one on TV). This results in extremely unmaintainable code. For example, what if your schema changes? Your entire application code now has to change. (This is called "tight coupling".)

    Better is to do what I suggested, and let your schema and application change independently of each other. The only coupling you have is in the guts of your classes. (This is called "loose coupling".)
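    A minimal sketch of what I mean (the class, table, and column names here are invented purely for illustration): callers only ever see business-level accessors, and the column names live in exactly one place inside the class.

    package Customer;
    use strict;

    sub new { bless { name => $_[1], balance => $_[2] }, $_[0] }

    # business-level accessors -- callers never see column names
    sub name    { $_[0]{name}    }
    sub balance { $_[0]{balance} }

    # the only place that knows about the schema; if the cust table
    # changes, only this method changes, not its callers
    sub load {
        my ( $class, $dbh, $id ) = @_;
        my ( $name, $balance ) = $dbh->selectrow_array(
            'select cust_name, acct_balance from cust where cust_id = ?',
            undef, $id,
        );
        return $class->new( $name, $balance );
    }

    1;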

    ------
    We are the carpenters and bricklayers of the Information Age.

    Don't go borrowing trouble. For programmers, this means Worry only about what you need to implement.

      Actually, it's tight coupling that I'm trying to avoid.

      The business problem is this: I'm loading a bunch of data into a database via the .bcp mechanism. The data is loaded into a "raw" table, and from there I run SQL over the raw table and insert the results into a more 'refined' version of the table.

      To represent this, I have a RawTable class, whose attributes are the columns of the corresponding raw table. Every time I add or remove a column, I have to change the class definition of RawTable (too tightly coupled). If the class builds its definition from the raw table schema, then I never have to change the RawTable class; it picks up the changes automatically.
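      (For concreteness, pulling the column list out of the raw table's schema can be done with plain DBI; here is a sketch only -- the driver name and credentials are placeholders, guessed from the double-dot table syntax above, and the "where 1 = 0" query is just one way to get column names without fetching any rows.)

      use DBI;

      sub get_columns {
          my ( $dbase, $table ) = @_;
          my $dbh = DBI->connect( "dbi:Sybase:database=$dbase",
                                  'user', 'password', { RaiseError => 1 } );

          # a query that returns no rows still tells us the column names
          my $sth = $dbh->prepare("select * from $table where 1 = 0");
          $sth->execute;
          my $columns = [ @{ $sth->{NAME} } ];

          $sth->finish;
          $dbh->disconnect;
          return $columns;
      }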

      Thus, if the schema changes (i.e. a column is removed or added), exactly the _reverse_ of what you described above happens: I have to change _no_ code.

      Thus in this case, I don't have to worry about the schema and application being too closely coupled; this close coupling is actually a Good Thing.

      Granted, for larger problems, all of your points above are quite valid.

        Fair enough. However, I would still say that you shouldn't dynamically generate a class from a table.

        Let's say that a column does change in your RawTable. The corresponding API of the RawTable class changes. How are the clients of that class supposed to know that the API has changed? What if it's a column that disappears? Your API contract is now invalid. Not good.

        ------
        We are the carpenters and bricklayers of the Information Age.

        Don't go borrowing trouble. For programmers, this means Worry only about what you need to implement.

Re: constructing dynamic class definitions at compile time (from schema definition)
by broquaint (Abbot) on Jul 25, 2002 at 14:26 UTC
    The problem of dynamic packages can be solved by a string eval()
    my $pkg = "Foo"; eval qq{ package $pkg; sub new { bless [], shift } }; print ref $pkg->new; __output__ Foo
    However this is a pretty nasty approach (I'm using string eval(), for the love of ...). But if you're just attaching methods and attribs to a package in a fairly straightforward fashion, you don't even need the package declaration
    my $pkg = "Foo"; { no strict 'refs'; *{"${pkg}::new"} = sub { bless [], shift }; } print ref $pkg->new; __output__ Foo
    Now you can have your dynamically created package just by fully qualifying the likes of subroutines and variables. So while it's possible to do what you want (this is Perl, after all), the question begs to be asked: why would you want to?
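    Applied to the original Table example, the same symbol-table trick might look roughly like this (just a sketch; in real code the package name and column list would come from the -dbase/-table arguments and get_columns()):

    my $pkg     = 'my_database::foo';
    my @columns = qw(bar baz);
    {
        no strict 'refs';

        # one get/set accessor per column
        for my $col (@columns) {
            *{"${pkg}::${col}"} = sub {
                my $self = shift;
                $self->{$col} = shift if @_;
                return $self->{$col};
            };
        }

        # a hash-based constructor to go with the accessors
        *{"${pkg}::new"} = sub {
            my ( $class, $args ) = @_;
            return bless { %{ $args || {} } }, $class;
        };

        @{"${pkg}::ISA"} = ('Table');
    }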
    HTH

    _________
    broquaint