in reply to Database input speed question
Another thing that I didn't see mentioned: if you can't use a bulk loader (computed fields, etc.), see whether you can open a transaction, do all of the inserts, and then commit. If you have to use DBI to insert lots of data this is a big time savings, since the statement preparation happens only once instead of once per record.
    # ... script starting stuff up here
    use DBI;

    my $dbh = DBI->connect(
        "DBI:Pg(RaiseError=>1,AutoCommit=>0):dbname=$database"
    );

    my $stmt = "INSERT INTO table ( field1, field2 ) VALUES ( ?, ? )";

    eval {
        $dbh->rollback;                    # start from a clean slate
        my $sth = $dbh->prepare($stmt);    # prepare once...
        while (<INPUT>) {
            my ($field1, $field2) = split;
            $sth->execute($field1, $field2);    # ...execute once per record
        }
        $dbh->commit;                      # all rows go in as one transaction
    };
    if ($@) {
        print STDERR "Data did not insert correctly: $@";
        $dbh->rollback;                    # undo the partial batch
    }
This does two things: the inserts go a lot faster, and they go in as a group or not at all, which makes cleaning up after a failure a lot easier. If a row fails halfway through the load, the rollback leaves the table exactly as it was, so there is no partial load to hunt down and delete.
The :Pg loads the PostgreSQL driver, which I used in this example since that's the database I use most.
The RaiseError=>1 turns failures in DBI calls into fatal errors, which the eval traps and reports on.
The AutoCommit=>0 tells DBI not to commit records until you call commit().
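As a side note, the driver attributes don't have to live in the DSN: DBI->connect() also accepts them as a hash reference in its fourth argument, which many people find easier to read. Here is a minimal sketch of the equivalent connection ($database, $user, and $password are placeholders for your own connection details):

    use DBI;

    # Same connection as above, with RaiseError and AutoCommit moved
    # out of the DSN into the attribute hash. $database, $user, and
    # $password are placeholders for your own connection details.
    my $dbh = DBI->connect(
        "DBI:Pg:dbname=$database",
        $user, $password,
        { RaiseError => 1, AutoCommit => 0 },
    );

Either spelling behaves the same; the attribute hash just keeps the DSN short when you have more than a couple of attributes to set.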