Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi all, I'm trying to load multiple rows into table A and table B in an MS SQL Server database. I use pseudocode below because the details are not important; it also assumes I have already connected to the database successfully:
eval {
    # table A
    foreach line (...) {
        prepare, execute, commit
    }
    # table B
    prepare, execute, commit
}
if ($@) {
    rollback;
}
i have two questions regarding the above pseudocode:

1) After loading 1552 rows into table A, SQL Server generates a "String or binary data would be truncated" error. The code jumps out of the eval block and hits the rollback statement. The end result is that table A has 1552 rows and table B has 0 rows. Shouldn't table A be rolled back to 0? This leads to question #2.

2) Should the pseudocode look more like below:

eval {
    # table A
    foreach line (...) {
        prepare, execute
    }
    # table B
    prepare, execute
}
if ($@) {
    rollback;
} else {
    commit;
}
In other words, should I be committing after each execute, or once after performing all of the executes? The latter seems like it would be a lot more efficient, but is it possible?
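The commit-once pattern the question sketches can be made concrete. The following is a minimal runnable sketch; it uses an in-memory SQLite database (via DBD::SQLite) purely as a stand-in for MS SQL so it can run anywhere, and a NOT NULL violation as a stand-in for the truncation error. The table and column names are made up for the demo:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# AutoCommit => 0 keeps everything in one open transaction;
# RaiseError => 1 makes a failed execute die, which is what lets
# the eval/rollback pattern catch it.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { AutoCommit => 0, RaiseError => 1, PrintError => 0 })
    or die "Couldn't connect: " . DBI->errstr;

$dbh->do('CREATE TABLE a (val TEXT NOT NULL)');
$dbh->do('CREATE TABLE b (val TEXT NOT NULL)');
$dbh->commit;    # the DDL is not part of the load transaction

my @lines = ('one', 'two', undef, 'four');   # undef violates NOT NULL

eval {
    my $sth_a = $dbh->prepare('INSERT INTO a (val) VALUES (?)');
    $sth_a->execute($_) for @lines;          # dies on the bad row

    my $sth_b = $dbh->prepare('INSERT INTO b (val) VALUES (?)');
    $sth_b->execute('summary row');

    $dbh->commit;    # reached only if every execute succeeded
};
if ($@) {
    warn "Load failed, rolling back: $@";
    eval { $dbh->rollback };    # rollback can itself die; guard it
}

my ($count_a) = $dbh->selectrow_array('SELECT COUNT(*) FROM a');
print "rows in table a after rollback: $count_a\n";    # 0, not 2
```

Because no commit happens until the very end, the two rows inserted before the failure are discarded by the rollback, which is the behavior the original code expected.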

Replies are listed 'Best First'.
Re: DBI and transactions
by hiseldl (Priest) on Jul 10, 2002 at 18:33 UTC
    >> Shouldn't table A be rolled back to 0?

    No. Since your code commits each row, a rollback will only affect the statement currently being executed; everything already committed stays committed. If you want the entire block to roll back, you have to commit only once, at the end of the transaction.

    So you should use the code you listed in part 2:

    eval {
        # table A
        foreach line (...) {
            prepare, execute
        }
        # table B
        prepare, execute
    }
    if ($@) {
        rollback;
    } else {
        commit;
    }
    Also, while using DBI, make sure you turn off autocommit, i.e.:
    my $dbh = DBI->connect('DBI:MSSQL:database', $user, $password,
        { AutoCommit => 0 })
        or die "Couldn't connect to database: " . DBI->errstr;
    otherwise each statement may be committed automatically; the default depends on the DBD.

    --
    .dave.

Re: DBI and transactions
by perrin (Chancellor) on Jul 10, 2002 at 18:24 UTC
    Your second example is correct. You could also just put the commit inside the eval but after all of the work on both tables.
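    perrin's variant, with commit as the last statement inside the eval, can be wrapped in a small reusable helper. The `in_transaction` name below is made up for this sketch, and an in-memory SQLite database stands in for MS SQL so the demo is runnable:

```perl
use strict;
use warnings;
use DBI;

# Run some work inside one transaction: commit only after all of it
# succeeds, roll back on any failure. Hypothetical helper name.
sub in_transaction {
    my ($dbh, $work) = @_;
    eval {
        $work->();
        $dbh->commit;              # last step inside the eval
    };
    if (my $err = $@) {
        eval { $dbh->rollback };   # rollback can fail too; don't mask $err
        die "transaction failed: $err";
    }
}

# Demo: the second insert violates NOT NULL, so nothing is kept.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { AutoCommit => 0, RaiseError => 1, PrintError => 0 });
$dbh->do('CREATE TABLE a (n INTEGER NOT NULL)');
$dbh->commit;

eval {
    in_transaction($dbh, sub {
        $dbh->do('INSERT INTO a (n) VALUES (1)');
        $dbh->do('INSERT INTO a (n) VALUES (NULL)');   # fails
    });
};
my ($n) = $dbh->selectrow_array('SELECT COUNT(*) FROM a');
print "rows after failed transaction: $n\n";    # 0
```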
Re: DBI and transactions
by mpeppler (Vicar) on Jul 10, 2002 at 19:35 UTC
    To add a little warning regarding transactions when using MS-SQL (or Sybase) - be sure that your transaction log is large enough to hold all of the rows that you update/insert before the commit call.

    When a row is inserted, it is first written to the transaction log. Then, when commit() is called, the change is moved from the transaction log to its permanent location; hence the need for a properly sized transaction log when you run large transactions.
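    One common way to keep any single transaction small enough for the log is to commit in batches of N rows. Note this deliberately trades away all-or-nothing atomicity: batches committed before a failure survive it. The batch size below is arbitrary, and SQLite in-memory is used only so the sketch runs:

```perl
use strict;
use warnings;
use DBI;

my $batch = 500;    # arbitrary; tune to your transaction log size

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { AutoCommit => 0, RaiseError => 1 });
$dbh->do('CREATE TABLE a (n INTEGER)');
$dbh->commit;

my $sth     = $dbh->prepare('INSERT INTO a (n) VALUES (?)');
my $pending = 0;
for my $n (1 .. 1234) {
    $sth->execute($n);
    # Commit every $batch rows so the log never holds the whole load.
    $dbh->commit if ++$pending % $batch == 0;
}
$dbh->commit;    # flush the final partial batch

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM a');
print "loaded $count rows\n";    # 1234
```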

    Michael

Re: DBI and transactions
by zaimoni (Beadle) on Jul 10, 2002 at 22:39 UTC

    Before even considering commit vs. rollback, you should add AutoCommit => 0 to the DBI constructor call. Otherwise DBI commits after every statement, rendering both the rollback and commit calls meaningless.

    The 1552 rows in table A most likely result from not setting AutoCommit => 0 in the DBI constructor. Databases that don't fully expose transactions (e.g., MySQL hides its transactions from the user) will fail fatally with AutoCommit => 0; others, such as MS SQL, should do what you expect once that attribute is set.

    The commit call is in the right location for the desired effect.