PerlMonks
Re: Re: Re: Database input speed question
by dga (Hermit) on Jul 31, 2003 at 20:49 UTC ( [id://279782] )
When in doubt, benchmark. I put together the following benchmark to see the effect of AutoCommit on inserting rows in bulk.
Here are some results.
Note that I am comparing wall-clock time, since the Perl code itself has very little to do. I ran 3 runs so that a representative sample could be obtained. This is running against PostgreSQL as the backend on the local host, so communication overhead is minimal. Committing after every 1000 rows in this test consistently yields a 10-fold speedup over using AutoCommit. As usual, YMMV, and the results will certainly vary if you use a different database engine.

Also note that the bulk data importer, fed a text file containing the same data, completes in under 1 second, while running one insert per row with a single commit for all 10,000 rows takes about 3 seconds. The data set in this test is only 663k.

I estimate that a significant portion of the time difference comes from commit's durability guarantee: when commit returns, the database pledges that the data has been written to durable media. With the manual commits this happens 10 times, whereas with AutoCommit it happens 10,000 times. If that were the only source of variability, manual commit would be 1000 times faster instead of 10 times, so the actual writing of the data accounts for a large portion of the time, and that, as mentioned, is the same for any approach.
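The commit-batching pattern being benchmarked can be sketched as follows. This is a hypothetical analogue, not the original Perl DBI code: it uses Python's sqlite3 with an in-memory database (the `bench` and `insert_rows` names are my own), so absolute timings will differ from the PostgreSQL numbers above, but the structure — committing once per chunk of rows instead of once per row — is the same idea.

```python
import sqlite3
import time

def insert_rows(conn, n_rows, commit_every):
    """Insert n_rows one at a time, committing once per commit_every rows."""
    cur = conn.cursor()
    for i in range(n_rows):
        cur.execute("INSERT INTO t (val) VALUES (?)", (i,))
        if (i + 1) % commit_every == 0:
            conn.commit()
    conn.commit()  # flush any remainder

def bench(commit_every, n_rows=10000):
    """Time n_rows inserts with the given commit interval; return (seconds, row count)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (val INTEGER)")
    start = time.perf_counter()
    insert_rows(conn, n_rows, commit_every)
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return elapsed, count

# commit_every=1 mimics AutoCommit; 1000 is the batched case discussed above
for chunk in (1, 1000, 10000):
    elapsed, count = bench(chunk)
    print(f"commit every {chunk:5d} rows: {elapsed:.3f}s, {count} rows inserted")
```

On a disk-backed database the gap between `commit_every=1` and `commit_every=1000` is where the 10-fold difference comes from, since each commit forces the data to durable media.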
In Section: Seekers of Perl Wisdom