in reply to Re: How to optimized mysql insert into table
in thread How to optimized mysql insert into table
Update: Performance Benchmark
To demonstrate my point that wrapping all of the inserts in a single transaction makes a HUGE difference, I offer the following benchmark code, which builds a test DB table with 170,568 records.
I used SQLite so that anybody can run this; if anything, MySQL will be faster. I am running on an ancient WinXP laptop.
Results:
A commit after every insert = 4.9 hours!
A single commit after all 170,568 inserts = 3 seconds!!!
That is what I call a HUGE difference!
Since my laptop takes about 5 hours, the OP's reported time of 4 hours sounds plausible. My guess is that building the table the way I show below will cut the OP's execution time from 4 hours to 1-2 seconds. Nothing really fancy is required, just some solid code. The reason a per-insert commit is so slow is that every commit forces the database to flush to disk; a single transaction pays that cost only once.
#!/usr/bin/perl
use strict;
use warnings;
use DBI qw(:sql_types);

my $dbfile = "./TestingInsert.sqlite";
if (-e $dbfile) {
    unlink $dbfile or die "Delete of $dbfile failed! $!\n";
}

my $start_seconds = time();
my $dbh = DBI->connect("dbi:SQLite:dbname=$dbfile", "", "", {RaiseError => 1})
    or die "Couldn't connect to database: " . DBI->errstr;

$dbh->do("CREATE TABLE test (
    id        integer PRIMARY KEY AUTOINCREMENT,
    version   integer,
    jobnumber integer
);");

$dbh->do("BEGIN");    # running one transaction is needed for speed!!!
import_data();
$dbh->do("COMMIT");

my $end_time = time();
print "Total elapsed time: ", $end_time - $start_seconds, "\n";

sub import_data {
    my $add = $dbh->prepare("INSERT INTO test (version, jobnumber) VALUES (?, ?)");
    for my $jobnumber (1 .. 170_568) {
        $add->execute($jobnumber, $jobnumber);    # DBI insert via prepared statement
    }
    return;
}

__END__
Using a commit for each insert:
Total elapsed time: 17833 seconds => 297 min => 4.9 hours
Using a single transaction for all 170,568 inserts:
Total elapsed time: 3 seconds
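For anyone who wants to reproduce this without installing Perl's DBI, here is a sketch of the same single-transaction pattern using Python's built-in sqlite3 module (not the original benchmark code; table and column names are copied from the Perl version, the row count is the same 170,568):

```python
import sqlite3
import time

ROWS = 170_568  # same row count as the Perl benchmark

# An in-memory DB keeps the sketch self-contained; use a file path
# (as the Perl version does) to benchmark real disk commits.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE test ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " version INTEGER,"
    " jobnumber INTEGER)"
)

start = time.time()
with con:  # one transaction: BEGIN ... COMMIT wrapped around ALL inserts
    con.executemany(
        "INSERT INTO test (version, jobnumber) VALUES (?, ?)",
        ((n, n) for n in range(1, ROWS + 1)),
    )
elapsed = time.time() - start

count = con.execute("SELECT COUNT(*) FROM test").fetchone()[0]
print(f"inserted {count} rows in {elapsed:.2f} seconds")
```

The `with con:` block is sqlite3's way of committing once at the end (or rolling back on error), which is exactly the single-transaction trick above; doing `con.commit()` inside the loop instead would reproduce the slow per-insert-commit case.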