My fellow perlmonks and I have hit what I'm sure is a common but tricky problem: we are in the process of 'porting' our 60K+ lines of pure perl from MySQL to DB2. It has not been fun... don't let anyone convince you that it's as simple as changing the DBI->connect call, because it's not.
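For context, here's a minimal sketch of the before-and-after connect calls. The DSNs, database name, and credentials are made up, and the attribute choices reflect our setup rather than anything either driver mandates:

    use strict;
    use DBI;

    # old mysql handle -- AutoCommit defaults to on, so every
    # statement was effectively its own transaction
    my $mysql_dbh = DBI->connect('dbi:mysql:database=ourdb;host=dbhost',
                                 'user', 'pass', { RaiseError => 1 });

    # new DB2 handle -- with AutoCommit off, DBI starts a transaction
    # implicitly and we must commit/rollback on the handle ourselves
    my $db2_dbh = DBI->connect('dbi:DB2:ourdb',
                               'user', 'pass',
                               { AutoCommit => 0, RaiseError => 1 });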
Our problem is that we have tons of OO perl modules that 'want' to be able to start and/or commit one or more transactions. The difficulty is managing these transactions from client code, since sometimes one is left with what are effectively 'nested' transaction blocks. DB2 simplifies things a little: transactions are started implicitly and only ever committed explicitly, which eliminates the nested structure of some of these blocks. However, managing commits and rollbacks on error is still tricky. Our solution has been to remove all 'commit' calls from the modules and to control transactions from client (script) code, roughly as sketched below. This is somewhat tedious, since there are also many scripts to fix.
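To make the division of labour concrete, here is roughly what a client script looks like under this scheme; Our::Order and its save method are hypothetical stand-ins for our real modules:

    use strict;
    use DBI;
    use Our::Order;    # hypothetical module; does DB work, never commits

    my $dbh = DBI->connect('dbi:DB2:ourdb', 'user', 'pass',
                           { AutoCommit => 0, RaiseError => 1 });

    eval {
        my $order = Our::Order->new(dbh => $dbh);
        $order->save;      # INSERTs/UPDATEs only -- no commit inside
        $dbh->commit;      # the script owns the transaction boundary
    };
    if ($@) {
        warn "transaction failed, rolling back: $@";
        $dbh->rollback;
    }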
An alternative solution is to put the 'commit' and 'rollback' calls into a DESTROY method or an END block, and either commit or roll back based on whether there were any problems during execution. The problem with this approach is that the DBI handle may be DESTROY'ed before the enclosing object is, so $dbh has expired by the time our DESTROY runs. Overriding DBI::db::DESTROY works, though it's hardly a desirable solution, especially once mod_perl and connection pooling enter the picture.
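One variation on this idea is a small guard object that holds its own reference to the handle, so the handle can't be reaped before the guard within normal lexical scoping. TxnGuard is a made-up name, not a CPAN module, and note that this ordering guarantee still evaporates during Perl's global destruction, where destruction order is undefined:

    package TxnGuard;    # hypothetical helper
    use strict;

    sub new {
        my ($class, $dbh) = @_;
        # holding $dbh keeps the handle alive at least as long as
        # the guard itself (outside of global destruction)
        return bless { dbh => $dbh, done => 0 }, $class;
    }

    sub commit {
        my $self = shift;
        $self->{dbh}->commit;
        $self->{done} = 1;
    }

    sub DESTROY {
        my $self = shift;
        # scope exited without an explicit commit: roll back
        $self->{dbh}->rollback unless $self->{done};
    }

    1;

    # in the caller:
    #   my $guard = TxnGuard->new($dbh);
    #   ...do work...
    #   $guard->commit;   # leaving scope without this rolls back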
So, the final question is this: what is the 'best' way to ensure that code performs a 'commit' on success or a 'rollback' on failure before program exit, preferably without having to sprinkle commits and rollbacks through all client (script) code?
In reply to committing db transactions by d_i_r_t_y