use strict;
use warnings;
use Benchmark qw( cmpthese );
use DBI;
# one handle with AutoCommit off (manual commits), one with it on
my $dbc = DBI->connect('DBI:Pg(RaiseError=>1,AutoCommit=>0):dbname=...');
my $dba = DBI->connect('DBI:Pg(RaiseError=>1,AutoCommit=>1):dbname=...');
cmpthese ( 10, {
'ac' => sub { &inserts($dba, 'auto', 1000, 0 ) },
'mc' => sub { &inserts($dbc, 'manual', 1000, 1 ) },
});
sub inserts
{
    my ($dbh, $table, $rows, $commit) = @_;
    my $stmt = qq[ INSERT INTO $table ( id, val ) VALUES ( ?, ? ) ];
    eval {
        # discard any pending work before the timed inserts start
        $dbh->rollback if $commit;
        my $sth = $dbh->prepare($stmt);
        foreach my $row ( 1..$rows )
        {
            $sth->execute( $row, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyz/' );
        }
        # one commit for the whole batch when AutoCommit is off
        $dbh->commit if $commit;
    };
    if ($@)
    {
        print STDERR "Insert Failed: $stmt: $@";
        $dbh->rollback;
        die "Program Terminated";
    }
    return;
}
END { $dbc->disconnect; $dba->disconnect };
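As an aside, the attributes embedded in the DSN above can equivalently be passed in connect's usual attribute hashref; a sketch with the same placeholder dbname:

my $dbc = DBI->connect('DBI:Pg:dbname=...', undef, undef,
                       { RaiseError => 1, AutoCommit => 0 });   # same effect as the DSN form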
Here are some results:
#run 1
Benchmark: timing 10 iterations of ac, mc...
        ac: 33 wallclock secs ( 0.33 usr + 0.56 sys = 0.89 CPU) @ 11.24/s (n=10)
        mc:  3 wallclock secs ( 0.38 usr + 0.35 sys = 0.73 CPU) @ 13.70/s (n=10)
#run 2
Benchmark: timing 10 iterations of ac, mc...
        ac: 37 wallclock secs ( 0.41 usr + 0.81 sys = 1.22 CPU) @  8.20/s (n=10)
        mc:  4 wallclock secs ( 0.37 usr + 0.50 sys = 0.87 CPU) @ 11.49/s (n=10)
#run 3
Benchmark: timing 10 iterations of ac, mc...
        ac: 38 wallclock secs ( 0.48 usr + 0.60 sys = 1.08 CPU) @  9.26/s (n=10)
        mc:  4 wallclock secs ( 0.38 usr + 0.40 sys = 0.78 CPU) @ 12.82/s (n=10)
Note that I am comparing wall-clock time, since the Perl code itself has very little to do. I made 3 runs so that a representative sample could be obtained. This is running against PostgreSQL as the backend on the local host, so there is minimal communication overhead.
Committing after each 1000 rows in this test consistently yields about a tenfold speedup over using AutoCommit. As usual, YMMV, and the numbers will certainly vary if you use a different database engine. Also note that loading the same data from a text file with the bulk importer takes less than 1 second, while inserting the 10000 rows one INSERT at a time with manual commits takes about 3 seconds.
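For reference, here is a minimal sketch of driving PostgreSQL's bulk importer (COPY FROM STDIN) through DBD::Pg; the table name and row contents are placeholders matching the benchmark above:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:Pg(RaiseError=>1,AutoCommit=>0):dbname=...');

# COPY text format: tab-separated columns, one row per line.
$dbh->do(q[ COPY manual ( id, val ) FROM STDIN ]);
foreach my $row ( 1..10000 )
{
    $dbh->pg_putcopydata("$row\tABCDEFGHIJKLMNOPQRSTUVWXYZ\n");
}
$dbh->pg_putcopyend;   # end the COPY stream
$dbh->commit;          # one commit for the whole load
$dbh->disconnect;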
The data set in this test is only about 663 kB. I estimate that a significant portion of the time difference comes from the durability guarantee: when commit returns, the database pledges that the data has been written to durable media. For the manual commits this happens 10 times, whereas with AutoCommit it happens 10000 times. If that were the only factor, manual commit would be 1000 times faster instead of 10 times faster, so the actual writing of the data constitutes a big portion of the time, and that, as mentioned, is the same for any approach.
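If commit frequency is the main knob for your workload, a middle ground is committing every N rows rather than per row or once at the end. A minimal sketch in the style of the benchmark code above (the chunk size is an arbitrary illustration, and the handle is assumed to be opened with AutoCommit => 0, like $dbc):

sub chunked_inserts
{
    my ($dbh, $table, $rows, $chunk) = @_;
    my $sth = $dbh->prepare(qq[ INSERT INTO $table ( id, val ) VALUES ( ?, ? ) ]);
    foreach my $row ( 1..$rows )
    {
        $sth->execute( $row, 'some value' );
        $dbh->commit if $row % $chunk == 0;   # durable point every $chunk rows
    }
    $dbh->commit;   # flush any trailing partial chunk
    return;
}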