cLive ;-) has asked for the wisdom of the Perl Monks concerning the following question:

I want to write a script that stores/reads data in/from a database. Initially, I will use MySQL/Apache/Linux for development.

Eventually, I need to be able to port it to different OSs/databases.

I was thinking the best way to do this would be:

Rather than use a 'chainsaw' approach, I wanted to be able to define fields, tables etc. using a common notation, and then translate it to the nearest equivalent for each DB used. I.e., I'm looking for a kind of 'lowest common denominator' of SQL and variable types that I can safely use. Has anyone else done anything similar and, if so, what did you use?

Or am I hoping for too much in being able to write 'cross DB' scripts?

cLive ;-)

I thought it worthwhile asking now (before I end up that creek without a paddle :) just in case I'm missing something...


(jptxs) Re: Cross platform commonality with the Perl DBI
by jptxs (Curate) on Apr 12, 2001 at 05:41 UTC
    I think your real challenge will lie not in making it cross-database, but in keeping the performance up to snuff while you do. Most people think in terms of feature issues when they think of porting, but as a performance analyst of sorts I see the other side of the coin. When I work with some of these e-application and ERP platforms (Vignette, BEA, PeopleSoft, SAP, etc.), they all put high value on their DB independence. But every one of them pays a price on the performance side: either they have huge routines to go through to optimize for each DB, plus checks to see which one it is, or they end up writing SQL so generic that it takes advantage of none of the optimizations these DBs make available in order to perform really well.

    The moral here is to be aware of performance considerations as you go forward and try to be DB-independent. Remember, too, that people will bend when they have a need: if what you're doing provides enough value to them, they will support the DB you design and optimize for, because they need what you are doing.

    "A man's maturity -- consists in having found again the seriousness one had as a child, at play." --Nietzsche
Re: Cross platform commonality with the Perl DBI
by lachoy (Parson) on Apr 12, 2001 at 08:02 UTC

    One answer: use SPOPS, which is probably getting a new release within a week. Basically, you get the normal operations (create/update, remove, fetch record) made super easy, and searching/fetching multiple records just easy. You don't need to deal with datatypes, quoting, or whether you're using an AUTO_INCREMENT field (MySQL) or an IDENTITY field (Sybase/MS SQL) -- it just works. For a simple example of how it works, see Re: Re: Re: Object Persistence to PostGRESQL.

    Chris
    M-x auto-bs-mode

Re: Cross platform commonality with the Perl DBI
by knobunc (Pilgrim) on Apr 12, 2001 at 17:03 UTC

    Simple SQL statements are okay (insert, delete, update). Outer joins have different (non-standard) syntax everywhere they are supported (at the time of writing Postgres does not yet support them, and MySQL has only LEFT/RIGHT joins, not FULL outer joins). Some DBs support subqueries using standard SQL syntax (MySQL does not). Some DBs support autoincrement fields (MySQL, PostgreSQL); others (Oracle) have sequences that you can use to do the same thing. Aggregate functions (min, max, count, ...) are pretty standard; other functions are not. Cursors are completely nonstandard. Dates and times are nonstandard. All bets are off for stored procedures.
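
    To make the autoincrement difference concrete, here is a rough DBI sketch (the table, column, and sequence names are invented for illustration): MySQL hands you the generated key after the insert, while Oracle makes you draw it from a sequence first.

        use strict;
        use DBI;

        my ($user, $pass) = ('someuser', 'somepass');

        # MySQL: an AUTO_INCREMENT column fills itself in, and
        # DBD::mysql exposes the generated value on the handle.
        my $dbh = DBI->connect('dbi:mysql:testdb', $user, $pass);
        $dbh->do('INSERT INTO users (name) VALUES (?)', undef, 'fred');
        my $id = $dbh->{mysql_insertid};

        # Oracle: pull the next key from a sequence, then insert it.
        my $ora = DBI->connect('dbi:Oracle:testdb', $user, $pass);
        my ($oid) = $ora->selectrow_array('SELECT user_seq.NEXTVAL FROM dual');
        $ora->do('INSERT INTO users (id, name) VALUES (?, ?)', undef, $oid, 'fred');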

    For defining the schema, each RDBMS has a different set of datatypes that it supports. You can use the standard SQL types if you want, since most RDBMSs map them to their internal types. Basically, it is a mess.

    If you want to write generic queries, then you can use an ODBC driver, since it will map 'standard' SQL to each DB's flavor. But it is pretty slow.

    What I do is wrap all code that accesses the DB in a subroutine, so that my main code makes a standard call and passes in a DB handle that it got from another subroutine I have. Then, inside the accessor routine, it can do whatever it needs to get the data in a useful form and return it. I try to write 'generic' SQL and avoid any DB-specific stuff, but at some point you will need to rely on something DB-specific. At that point you can have the subroutine determine which RDBMS it is talking to and execute the correct code (a sketch follows below). I like to wrap anything that needs an autoincrement value in a stored procedure, since the way you call them is somewhat standard and I can hide the details of getting the next ID in there.
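
    For what it's worth, here is a minimal sketch of such a wrapper (the subroutine, table, and column names are made up); DBI reports which driver a handle belongs to via $dbh->{Driver}{Name}:

        use strict;

        # Fetch one user row in a 'generic' way. The caller never
        # needs to know which RDBMS the handle points at.
        sub fetch_user {
            my ($dbh, $id) = @_;
            my $driver = $dbh->{Driver}{Name};   # e.g. 'mysql', 'Oracle', 'Pg'

            my $sql = 'SELECT id, name, created FROM users WHERE id = ?';
            if ($driver eq 'Oracle') {
                # Per-driver tweak: force Oracle to return the date
                # in a predictable format.
                $sql = q{SELECT id, name,
                                TO_CHAR(created, 'YYYY-MM-DD') AS created
                         FROM users WHERE id = ?};
            }
            return $dbh->selectrow_hashref($sql, undef, $id);
        }

        # Usage: my $row = fetch_user($dbh, 42);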

    For handling the table definitions, you could write standard SQL and then have a Perl script translate all of the datatypes from the generic form to the DB-specific one. The problem here is deciding how to handle the autoincrement columns.
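
    A sketch of that translation idea (the 'generic' type names and the mappings below are placeholders, and real DDL would need more careful parsing than a single regex):

        use strict;

        # Map home-grown generic column types to each target RDBMS.
        my %typemap = (
            mysql  => { SERIAL => 'INT AUTO_INCREMENT', BIGTEXT => 'TEXT' },
            Pg     => { SERIAL => 'SERIAL',             BIGTEXT => 'TEXT' },
            Oracle => { SERIAL => 'NUMBER(38)',         BIGTEXT => 'CLOB' },
        );

        sub translate_ddl {
            my ($driver, $ddl) = @_;
            my $map = $typemap{$driver} or die "no type map for $driver";
            # Naive word-for-word substitution of the generic names.
            $ddl =~ s/\b(SERIAL|BIGTEXT)\b/$map->{$1}/g;
            return $ddl;
        }

        print translate_ddl('Oracle',
            'CREATE TABLE users (id SERIAL PRIMARY KEY, bio BIGTEXT)');
        # For Oracle you would still have to create a sequence (and
        # perhaps a trigger) by hand -- the autoincrement wrinkle again.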

    Good luck. The problem is not insurmountable, but it is twisty. You also have to make sure that you don't become DB-dependent out of convenience. If you must do something that you know is DB-dependent, flag it so you can find it later.

    -ben