fionbarr has asked for the wisdom of the Perl Monks concerning the following question:

I am using the following to insert 26,000 records into a SQL Server database. It takes 7 minutes. I would appreciate suggestions to make it faster.
    print "\n\t\tdo you want to write out the table (Y/N) ";
    my $ans = <STDIN>;
    if ($ans =~ /y/i) {
        # say "\n\t\tDeleting Current Week: $weeknumber from Server Patch table";
        my $sql  = qq(DELETE FROM WFSServer_Patch WHERE weeknum = $weeknumber);
        my $rows = $dbh->do($sql);
        say "\n\t\tWriting Server Patch Table";
        my $sth = $dbh->prepare(<<SQL);
    insert into WFSServer_Patch (weeknum, server, patch)
    values (?, ?, ?)
    SQL
        foreach my $server (sort keys %server_patch) {
            foreach my $desc (sort keys %{ $server_patch{$server} }) {
                $sth->execute($weeknumber, $server, $desc)
                    or die "can't execute statement: $DBI::errstr\n";
            }
        }
    }

Replies are listed 'Best First'.
Re: speedup sqlserver insert
by choroba (Cardinal) on Sep 12, 2014 at 13:23 UTC
    Is AutoCommit on? What part of the time is taken by the DELETE?
      I'll check on autocommit... I have just timed the insert; the delete is negligible.

        Have you investigated the bulk load facilities of your server? Most likely, large(r) bulk inserts are done faster by making the server process read from a CSV file.
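        The idea above can be sketched as follows: write the hash out to a CSV file from Perl, then issue a single server-side `BULK INSERT` so SQL Server reads the file itself instead of taking 26,000 round trips. The file path, sample data, and the `BULK INSERT` options here are assumptions for illustration; the `$dbh->do` call is shown commented out because it needs a real SQL Server connection and a path visible to the server.

```perl
use strict;
use warnings;

# Sample data standing in for the real %server_patch (assumption).
my %server_patch = (
    alpha => { KB100 => 1, KB200 => 1 },
    beta  => { KB300 => 1 },
);
my $weeknumber = 37;

# Write one CSV row per (weeknum, server, patch) triple.
open my $fh, '>', 'patch.csv' or die "can't write patch.csv: $!";
for my $server (keys %server_patch) {
    print {$fh} "$weeknumber,$server,$_\n" for keys %{ $server_patch{$server} };
}
close $fh;

# Then a single server-side statement replaces the row-by-row inserts
# (path and options are hypothetical; the file must be readable by the
# SQL Server process, not just by the Perl client):
# $dbh->do(q{
#     BULK INSERT WFSServer_Patch FROM 'C:\path\patch.csv'
#     WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
# });

my @lines = do { open my $in, '<', 'patch.csv' or die $!; <$in> };
print scalar(@lines), " rows written\n";
unlink 'patch.csv';
```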

        I wouldn't bother turning off AutoCommit, as it is a handle-wide setting and you'd have to call commit everywhere you run any SQL (often even SELECT statements). Just start a transaction with begin_work before the prepare for the insert and commit it at the end of your loop.
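        A minimal sketch of that suggestion, using an in-memory SQLite database so it runs anywhere; the same DBI calls (begin_work/commit around the loop) apply unchanged with DBD::ODBC against the real WFSServer_Patch table:

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite stands in for SQL Server here (assumption).
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 1 });

$dbh->do('CREATE TABLE WFSServer_Patch (weeknum INTEGER, server TEXT, patch TEXT)');

my $sth = $dbh->prepare(
    'INSERT INTO WFSServer_Patch (weeknum, server, patch) VALUES (?, ?, ?)');

# One transaction around the whole loop instead of one autocommit
# (and one fsync on the server) per row.
$dbh->begin_work;
for my $i (1 .. 1000) {
    $sth->execute(37, "server$i", "KB$i");
}
$dbh->commit;

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM WFSServer_Patch');
print "$count rows inserted\n";
```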

        If you do general searches for speedups you'll find loads of other possibilities: bulk loading (which someone else already mentioned), DBI's execute_array, disabling indexes on your table until after the insert, etc.
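        For the execute_array route mentioned above, a small sketch (again on in-memory SQLite for portability; DBI emulates execute_array for drivers without native array-binding support, and drivers that do support it can batch the rows in far fewer round trips). The sample column data is an assumption:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 });

$dbh->do('CREATE TABLE WFSServer_Patch (weeknum INTEGER, server TEXT, patch TEXT)');

my $sth = $dbh->prepare(
    'INSERT INTO WFSServer_Patch (weeknum, server, patch) VALUES (?, ?, ?)');

# Bind whole columns at once; one execute_array call covers all rows.
my @weeks   = (37, 37, 37);
my @servers = qw(alpha beta gamma);
my @patches = qw(KB100 KB200 KB300);

my $tuples = $sth->execute_array(
    { ArrayTupleStatus => \my @status },  # per-row success/error info
    \@weeks, \@servers, \@patches,
);
print "$tuples rows inserted\n";
```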

        UPDATE: Also, what is the point of sorting those hashes? It won't affect what rows are inserted into the database, as far as I can see.