d_i_r_t_y has asked for the wisdom of the Perl Monks concerning the following question:

fellow monks,

my fellow perlmonks and i have hit what i'm sure is a common but tricky problem: we are in the process of 'porting' our 60K+ lines of pure perl from mysql to DB2. it has not been fun... don't let anyone convince you that it's as simple as changing the DBI->connect call, cause it's not.

our problem is that we have tons of OO perl modules that 'want' to be able to start and/or commit one or more transaction(s). the problem is that of managing these transactions from client code, since sometimes one is left with what are effectively 'nested' transaction blocks. DB2 simplifies things a little by only committing, and not explicitly starting a transaction, thereby eliminating the nested nature of some of these transactions. however, managing committal and rollbacks of data on error is still tricky. our solution has been to remove all 'commit' calls from all modules and to control transactions from client (script) code, which is somewhat tedious, since there are also many scripts to fix.

An alternative solution is to put 'commit' and 'rollback' code into a DESTROY{} or END{} block, and either commit or rollback based on whether there have been any problems during execution. the problem with this approach is that DBI objects are DESTROY'ed before the enclosing object is, and thus the $dbh expires before the DESTROY block is called. overriding the DBI::db::DESTROY{} sub works ok, though this is hardly a desirable solution, especially when mod_perl and connection pooling are introduced.
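One way around the destruction-order problem described above is a small guard object that holds its own reference to $dbh, so the handle cannot be garbage-collected before the guard's DESTROY runs. A minimal sketch (TxnGuard and its methods are made-up names for illustration, not a real module):

```perl
package TxnGuard;
use strict;
use warnings;

sub new {
    my ($class, $dbh) = @_;
    # Storing $dbh in the guard pins the handle until the guard itself dies,
    # so DESTROY below can still reach it.
    return bless { dbh => $dbh, ok => 0 }, $class;
}

# Call this once all the work has succeeded.
sub succeed { $_[0]{ok} = 1 }

sub DESTROY {
    my $self = shift;
    if ($self->{ok}) { $self->{dbh}->commit   }
    else             { $self->{dbh}->rollback }   # anything unfinished rolls back
}

package main;
# my $guard = TxnGuard->new($dbh);
# ... do the work ...
# $guard->succeed;   # without this, going out of scope triggers a rollback
```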

so, the final question is this: what is the 'best' way to ensure that successful code performs a 'commit' or a 'rollback' before program exit, preferably without having to perform commit's and rollback's in all client (script) code?

Re: committing db transactions
by trantor (Chaplain) on Aug 09, 2001 at 10:36 UTC

    One solution could be moving $dbh variables to a higher level scope, so that the objects they refer to are not disposed of while you still need them. Objects may still see them as lexicals through references, aliasing, using our, whatever.

    You could also use the eval BLOCK syntax to check how a certain section of code behaves and then go for a commit or a rollback. This way, your eval BLOCK acts like a little "program": when it "exits", control returns to your code and not to the operating system. Exceptions can be caught this way.

    Regarding portability, I found the DBI interface to be very database independent. Unfortunately there is no SQL abstraction layer in DBI. This, in conjunction with the fact that many programmers think they're smarter than the quote method, leads to a total mess when SQL statements have to be ported.

    Like I said the typical example is quoting in SQL strings, happily done with the s/// operator (if done at all) and not with the database independent quote method.
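    To make the quoting point concrete, here is a small sketch contrasting the two styles (the table and value are invented for the example):

```perl
use strict;
use warnings;

my $raw = "O'Reilly";

# Fragile, backend-specific: hand-rolled escaping with s///.
(my $escaped = $raw) =~ s/'/''/g;

# Portable: DBI's quote method lets the driver quote correctly
# for whichever RDBMS it is connected to.
my $quoted = $dbh->quote($raw);
my $sql    = "SELECT * FROM authors WHERE name = $quoted";
```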

    Also, variable interpolation in double quotish strings is sometimes abused, while placeholders could do a much better job, e.g. deciding whether a value should be enclosed in quotes. Some RDBMSs (for example DB2) might be very sensitive on this point, while others (namely MySQL) aren't so fussy...
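    The placeholder idiom looks like this (table and column names are invented for the sketch; the driver handles quoting and typing of the bound values):

```perl
use strict;
use warnings;

# Instead of interpolating $balance and $id into the SQL string:
my $sth = $dbh->prepare(
    'UPDATE accounts SET balance = ? WHERE id = ?'
);
$sth->execute($balance, $id);   # values bound safely by the driver
```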

    The third aspect in database independence is always related to SQL statements, but this time it's about using one RDBMS' own functions. MySQL has loads of functions and they can be very useful indeed, but one must be prepared to reimplement them in another RDBMS when not available (DB2 allows this) or to find the equivalent (e.g. UPPER() in some RDBMSs is UCASE in others).

    Finally, SQL can be used for querying, inserting, changing, deleting records but also for database administration tasks, and the latter aspect is almost always unportable regardless of DBI's completeness.

    -- TMTOWTDI

Re: committing db transactions
by mitd (Curate) on Aug 09, 2001 at 10:39 UTC
    I am a little confused about the original environment, since MySQL does not support native commit and rollback. So has the original pre-port code got a lot of 'simulated' commit and rollback code ??

    That being said here are a couple things to ponder that may help.

    It is common in DBI to wrap transactions in eval. You may already know this but I state the obvious just in case :)

    eval {
        # ... do some transactions ...
        $dbh->commit();
    };
    $dbh->rollback() if $@;
    Also DB2 supports 4 transaction isolation levels: Repeatable Read, Cursor Stability (default), Read Stability, Uncommitted Read. The good news is these may help to simplify transactions. The bad news is that DBD::DB2 supports this through an external variable, TXNISOLATION, found in db2cli.ini. It cannot be changed in SQL, so you only get one choice, and it is applied globally across your entire database.

    Hope this may give you some ideas. Good Luck.

    mitd-Made in the Dark
    'My favourite colour appears to be grey.'

      I am a little confused about the original environment, since MySQL does not support native commit and rollback. So has the original pre-port code got a lot of 'simulated' commit and rollback code ??

      not true, transactions were introduced in version 3.23. so, surprising as it is, we did indeed have real transactions with mysql ;-) the only 'trick' is that we had to use the $dbh->do('begin'); ... ; $dbh->do('commit'); syntax rather than the begin and commit methods per se.
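      For comparison, here is the MySQL-era pattern described above next to the plainer DBI idiom (AutoCommit / begin_work), which is the more portable way to express the same thing:

```perl
use strict;
use warnings;

# MySQL 3.23 style: issue the statements by hand, driver-specific.
$dbh->do('begin');
# ... statements ...
$dbh->do('commit');

# Portable DBI idiom: turn off AutoCommit for the transaction.
$dbh->begin_work;        # or: $dbh->{AutoCommit} = 0;
# ... statements ...
$dbh->commit;
```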

      d_i_r_t_y

Re: committing db transactions
by htoug (Deacon) on Aug 09, 2001 at 14:33 UTC
    You can do it by subclassing DBI - possible, but not easy.
    You can then let all your code do all the commits that it wants to, without affecting the DB, by overriding DBI::db::commit to be a no-op (just a return :). You should probably log where the commits are called from, to be able to ferret them out of the code later.
    Rollback could be extended with an error traceback if necessary.
    At the end you could do a 'real' commit to commit transactions before disconnecting.
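    A rough sketch of that idea, redefining commit as a logged no-op while saving the original for the one real commit at the end (this glues into DBI's internals, so treat it as an illustration, not a drop-in solution):

```perl
use strict;
use warnings;
use DBI;

my $real_commit;
{
    no warnings 'redefine';
    $real_commit = \&DBI::db::commit;     # keep the original around

    *DBI::db::commit = sub {
        my ($dbh) = @_;
        my (undef, $file, $line) = caller;
        warn "commit ignored at $file line $line\n";  # ferret these out later
        return 1;                                     # no-op
    };
}

# ... run the existing code, commits are now logged no-ops ...

# At program end, perform the single real commit before disconnecting:
# $real_commit->($dbh);
```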

    Hmmm.........
    It is normally a Good Thing (tm) to have short transactions.
    Nested transactions are often a sign of design failures.

    I would be very cautious with code that forced me to do things like those you describe.

      thanks for the reply...
      You can do it by subclassing DBI - possible, but not easy.

      i actually started down this track, until i read the source of the DBI module and realised how many objects/packages were involved/would have to be overridden...

      It is normally a Good Thing (tm) to have short transactions. Nested transactions are often a sign of design failures.

      i agree that transactions (and thus table/column locks) should be kept short, however, in this case, all 'mini-transactions' must either all fail, or all succeed. hence, it seems the only answer is to perform a single, rather large transaction commit/rollback right at the end.

      d_i_r_t_y

        I am also looking into adding transaction support to a large Perl web application. I currently think the best way to add transaction support is by passing the responsibility to the caller (client) code. The basic problem is to define how large a "unit of work" is in your application. Each unit of work should have one transaction. I don't think that the API should dictate to the caller how large a unit of work is.
Re: committing db transactions
by busunsl (Vicar) on Aug 09, 2001 at 11:35 UTC
    I'm not familiar with DB2, but you should be able to nest transactions and have them work 'as expected'.
    I would control the transactions in the modules by explicitly issuing begin tran, commit and rollback, and check before every begin tran whether a transaction already exists. That way you can avoid the nesting and have one start and one end for each transaction.
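    The check-before-begin idea can be sketched as a depth counter, so that nested "transactions" collapse into a single real begin/commit pair (txn_begin/txn_commit are invented helper names, and the begin-tran syntax would need to match your RDBMS):

```perl
use strict;
use warnings;

my $txn_depth = 0;

sub txn_begin {
    my ($dbh) = @_;
    $dbh->do('begin tran') if $txn_depth == 0;  # only the outermost begins
    $txn_depth++;
}

sub txn_commit {
    my ($dbh) = @_;
    $txn_depth--;
    $dbh->commit if $txn_depth == 0;            # only the outermost commits
}
```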