Re: Re: Database input speed question

by mpeppler (Vicar)
on Jul 31, 2003 at 18:16 UTC [id://279702]


in reply to Re: Database input speed question
in thread Database input speed question

I was going to suggest something similar, but there are some issues with this as well.

First, one must make sure to still commit at regular intervals. The original poster mentioned 10M rows (I'm guessing 10 million) - you'd need a rather large transaction log to batch all of those rows into a single transaction.
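
A minimal sketch of that chunked-commit pattern with DBI (the DSN, table, and data below are made up purely for illustration; the point is committing every N rows rather than once at the very end):

    use strict;
    use warnings;
    use DBI;

    # Hypothetical connection and table -- only the commit pattern matters here.
    my $dbh = DBI->connect('DBI:Sybase:server=MYSERVER', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 });

    my $batch_size = 1000;    # commit every 1000 rows
    my $sth = $dbh->prepare('INSERT INTO big_table ( id, val ) VALUES ( ?, ? )');

    my $count = 0;
    for my $id ( 1 .. 10_000 ) {
        $sth->execute( $id, "value for row $id" );
        $dbh->commit unless ++$count % $batch_size;   # keeps the transaction log short
    }
    $dbh->commit;             # flush the final partial batch
    $dbh->disconnect;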

Second, batching may not necessarily gain you all that much. With Sybase, at least, the total amount of work the database needs to do is the same whether you batch transactions or not; the only real difference is that the rows are committed to the "real" table only when the server gets the COMMIT TRAN command, rather than each row being committed there individually (i.e. the total amount of disk I/O is the same).

Michael

Re: Re: Re: Database input speed question
by dga (Hermit) on Jul 31, 2003 at 20:49 UTC

    When in doubt, Benchmark

    I made up the following benchmark to see the effect of AutoCommit on inserting rows in bulk.

    use strict;
    use warnings;
    use Benchmark qw( cmpthese );
    use DBI;

    # Two connections: $dbc commits manually, $dba runs with AutoCommit on.
    my $dbc = DBI->connect('DBI:Pg(RaiseError=>1,AutoCommit=>0):dbname=...');
    my $dba = DBI->connect('DBI:Pg(RaiseError=>1,AutoCommit=>1):dbname=...');

    cmpthese( 10, {
        'ac' => sub { inserts( $dba, 'auto',   1000, 0 ) },
        'mc' => sub { inserts( $dbc, 'manual', 1000, 1 ) },
    });

    sub inserts {
        my ( $dbh, $table, $rows, $commit ) = @_;
        my $stmt = qq[ INSERT INTO $table ( id, val ) VALUES ( ?, ? ) ];
        eval {
            $dbh->rollback if $commit;    # start from a clean transaction
            my $sth = $dbh->prepare($stmt);
            foreach my $row ( 1 .. $rows ) {
                $sth->execute( $row,
                    'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyz/' );
            }
            $dbh->commit if $commit;      # one commit per 1000 rows
        };
        if ($@) {
            print STDERR "Insert Failed: $stmt: $@";
            $dbh->rollback;
            die "Program Terminated";
        }
        return;
    }

    END { $dbc->disconnect; $dba->disconnect }

    Here are some results.

    #run 1
    Benchmark: timing 10 iterations of ac, mc...
            ac: 33 wallclock secs ( 0.33 usr +  0.56 sys =  0.89 CPU) @ 11.24/s (n=10)
            mc:  3 wallclock secs ( 0.38 usr +  0.35 sys =  0.73 CPU) @ 13.70/s (n=10)
    #run 2
    Benchmark: timing 10 iterations of ac, mc...
            ac: 37 wallclock secs ( 0.41 usr +  0.81 sys =  1.22 CPU) @  8.20/s (n=10)
            mc:  4 wallclock secs ( 0.37 usr +  0.50 sys =  0.87 CPU) @ 11.49/s (n=10)
    #run 3
    Benchmark: timing 10 iterations of ac, mc...
            ac: 38 wallclock secs ( 0.48 usr +  0.60 sys =  1.08 CPU) @  9.26/s (n=10)
            mc:  4 wallclock secs ( 0.38 usr +  0.40 sys =  0.78 CPU) @ 12.82/s (n=10)

    Note that I am comparing wall-clock time, since the Perl code itself has very little to do. I did 3 runs to get a representative sample. This is running against PostgreSQL on the local host, so there is minimal communication overhead.

    Committing after every 1000 rows in this test consistently yields roughly a 10-fold speedup over using AutoCommit. As usual, YMMV, and the results will certainly vary with a different database engine. Also note that the bulk data importer, fed a text file containing the same data, completes in under 1 second, while running individual inserts with a single commit for 10,000 rows takes about 3 seconds.
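
    For comparison, a rough sketch of that bulk path through DBD::Pg's COPY support (the dbname, table, and data are hypothetical, and the pg_putcopydata/pg_putcopyend methods assume a reasonably recent DBD::Pg):

        use strict;
        use warnings;
        use DBI;

        # Hypothetical database and table; streams rows through COPY ... FROM STDIN.
        my $dbh = DBI->connect('DBI:Pg:dbname=testdb', '', '', { RaiseError => 1 });

        $dbh->do('COPY manual ( id, val ) FROM STDIN');
        for my $id ( 1 .. 10_000 ) {
            # COPY text format: tab-separated columns, one row per line
            $dbh->pg_putcopydata("$id\tABCDEFGHIJKLMNOPQRSTUVWXYZ\n");
        }
        $dbh->pg_putcopyend();
        $dbh->disconnect;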

    The data set size in this test is only 663k of data. I am estimating that a significant portion of the time difference comes from the guarantee, when commit returns, that the data has been written to durable media: for the manual commits that happens 10 times, whereas with AutoCommit it happens 10,000 times. If that were the only factor, manual commit would be 1000 times faster instead of 10 times, so the actual writing of the data accounts for a large portion of the time, and that, as mentioned, is the same for either approach.
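
    A back-of-envelope sketch of that reasoning (the per-row write cost and per-commit sync cost below are invented numbers, chosen only because they reproduce the observed ~33s vs ~3s split):

        # Hypothetical costs: ~0.3 ms of write work per row, ~3 ms of sync per commit.
        my ( $rows, $write_ms, $sync_ms, $commits ) = ( 10_000, 0.3, 3.0, 10 );
        my $autocommit_s = $rows * ( $write_ms + $sync_ms ) / 1000;            # 10,000 syncs
        my $batched_s    = ( $rows * $write_ms + $commits * $sync_ms ) / 1000; # 10 syncs
        printf "autocommit ~%.0fs, batched ~%.0fs, ratio ~%.0fx\n",
            $autocommit_s, $batched_s, $autocommit_s / $batched_s;

    The row-write term is identical in both cases, which is why cutting the number of syncs by a factor of 1000 only buys about a 10-fold speedup.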

      When in doubt, Benchmark
      Good point.

      I'll try to run that next week with Sybase ASE.

      Note that, as with all things of this type, a lot depends on the actual data being loaded, whether there are any triggers, indexes, etc.

      Michael

      A quick run of that code on my laptop (RH 7.3, Athlon 1.something GHz, Sybase ASE 12.5.0.3) shows that I was wrong, and that batched commits almost certainly improve the performance for large inserts:
      ac: 129 wallclock secs ( 2.77 usr +  0.40 sys =  3.17 CPU) @  3.15/s (n=10)
      mc:  14 wallclock secs ( 2.05 usr +  0.40 sys =  2.45 CPU) @  4.08/s (n=10)
      This is with no indexes on the tables.

      Still, it's an interesting problem - I'll have to try this on some more serious iron, and with more data, and, ideally, with raw devices rather than disk devices that have O_DSYNC turned on.

      Michael
