in reply to committing db transactions

You can do it by subclassing DBI - possible, but not easy.
You can then let all your code do all the commits it wants, without affecting the DB, by overriding the commit method in DBI::db as a no-op (just a return :). You should probably log where the commits are called from, to be able to ferret them out of the code later.
Rollback could be extended with an error traceback if necessary.
At the end you could do one 'real' commit to commit the transaction before disconnecting.
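A minimal sketch of that approach (untested; saved as MyDBI.pm so DBI can load it, with the class name, logging, and real_commit helper purely illustrative):

    package MyDBI;
    use strict;
    use warnings;
    use DBI;
    our @ISA = ('DBI');

    package MyDBI::db;
    our @ISA = ('DBI::db');

    # Swallow commits, but log where they were called from so they
    # can be ferreted out of the code later.
    sub commit {
        my ($dbh) = @_;
        my (undef, $file, $line) = caller;
        warn "commit ignored at $file line $line\n";
        return 1;
    }

    # The 'real' commit, for use once at the very end.
    sub real_commit {
        my ($dbh) = @_;
        return $dbh->SUPER::commit;
    }

    package MyDBI::st;
    our @ISA = ('DBI::st');

    1;

You would then connect with something like DBI->connect($dsn, $user, $pass, { RootClass => 'MyDBI', AutoCommit => 0, RaiseError => 1 }) and call $dbh->real_commit just before disconnecting.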

Hmmm.........
It is normally a Good Thing (tm) to have short transactions.
Nested transactions are often a sign of design failures.

I would be very cautious with code that forced me to do things like those you describe.

Re: Re: committing db transactions
by d_i_r_t_y (Monk) on Aug 09, 2001 at 18:24 UTC
    thanks for the reply...
    You can do it by subclassing DBI - possible, but not easy.

    i actually started down this track, until i read the source of the DBI module and realised how many objects/packages were involved and would have to be overridden...

    It is normally a Good Thing (tm) to have short transactions. Nested transactions are often a sign of design failures.

    i agree that transactions (and thus table/column locks) should be kept short; however, in this case, the 'mini-transactions' must either all fail or all succeed. hence, it seems the only answer is to perform a single, rather large transaction with the commit/rollback right at the end.
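
    a minimal sketch of that shape (the step subroutines here are just made-up placeholders):

        my $dbh = DBI->connect($dsn, $user, $pass,
                               { AutoCommit => 0, RaiseError => 1 });

        eval {
            add_order($dbh, \%order);      # each 'mini-transaction' just
            add_items($dbh, \@items);      # does its work and never
            update_stock($dbh, \@items);   # commits on its own
            $dbh->commit;                  # the single commit at the end
        };
        if ($@) {
            warn "rolling back: $@";
            $dbh->rollback;
        }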

    d_i_r_t_y

      I am also looking into adding transaction support to a large Perl web application. I currently think the best way to add transaction support is by passing the responsibility to the caller (client) code. The basic problem is to define how large a "unit of work" is in your application. Each unit of work should have one transaction. I don't think that the API should dictate to the caller how large a unit of work is.
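
      As a rough sketch of what I mean (all names hypothetical, and assuming the handle was opened with AutoCommit => 0): the API routines take a handle and never commit; the caller decides what one unit of work is and wraps it in a single transaction:

          # API code: does the work, never commits
          sub create_user  { my ($dbh, $u) = @_; $dbh->do('INSERT INTO users  (name) VALUES (?)', undef, $u) }
          sub grant_access { my ($dbh, $u) = @_; $dbh->do('INSERT INTO access (name) VALUES (?)', undef, $u) }

          # Caller code: one unit of work, one commit/rollback
          sub in_transaction {
              my ($dbh, $work) = @_;
              eval { $work->(); $dbh->commit };
              if ($@) { my $err = $@; eval { $dbh->rollback }; die $err }
              return 1;
          }

          in_transaction($dbh, sub {
              create_user($dbh, 'alice');
              grant_access($dbh, 'alice');
          });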