Evanovich has asked for the wisdom of the Perl Monks concerning the following question:
Yeast1 is a large table: 6265 rows and about 100 columns so far, and growing to about 400. This loop presently takes hours to complete, and gets longer as I load more data into yeast1. Does anyone know how I can optimize this DBI query?

thanks, Evan

```perl
for (1..$#{$columns}) {
    $sth = $dbh->prepare("UPDATE yeast1 set $tables[$_] = temp/$tables[$_-1]
                          where yeast1.orf = temp.orf;
                          INSERT INTO profile_index
                          VALUES ('$tables[$_-1]', '$columns->[$_]')");
    $sth->execute;
    $sth = $dbh->prepare("UPDATE yeast1 set $tables[$_-1] = 999
                          where $tables[$_-1] IS NULL");
    $sth->execute;
}
```
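The usual first fixes for a loop like this are to prepare statements once (outside the loop), bind values with `?` placeholders instead of rebuilding SQL each iteration, and wrap the whole batch in a single transaction rather than autocommitting every statement. A minimal sketch of that pattern follows; it uses a hypothetical in-memory DBD::SQLite database and made-up `@tables`/`@columns` data as stand-ins, since the real yeast1 schema isn't shown. Note that column names can't be bound as placeholders, so only the `INSERT` benefits from prepare-once here:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical in-memory database standing in for the real schema.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do("CREATE TABLE profile_index (tbl TEXT, col TEXT)");

# Made-up data; the real code builds these from the loaded tables.
my @tables  = qw(t0 t1 t2 t3);
my @columns = qw(c0 c1 c2 c3);

# Prepare the INSERT once, outside the loop, with placeholders.
my $ins = $dbh->prepare("INSERT INTO profile_index VALUES (?, ?)");

$dbh->begin_work;    # one transaction instead of a commit per statement
for my $i (1 .. $#columns) {
    $ins->execute($tables[$i-1], $columns[$i]);
    # ...the per-column UPDATEs from the original loop would go here;
    # their SQL must still be built per iteration, since column names
    # cannot be placeholders.
}
$dbh->commit;

my ($n) = $dbh->selectrow_array("SELECT COUNT(*) FROM profile_index");
print "$n rows inserted\n";
```

One further point worth checking: the original `prepare` packs an `UPDATE` and an `INSERT` into one statement handle, and many DBI drivers do not accept multiple statements per `prepare`; splitting them into separate handles also makes it possible to prepare each one exactly once.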
Replies are listed 'Best First'.
Re: Perl DBI Performance
by runrig (Abbot) on Aug 18, 2001 at 04:21 UTC
by Evanovich (Scribe) on Aug 18, 2001 at 04:43 UTC
by Trimbach (Curate) on Aug 18, 2001 at 05:34 UTC
Re: Perl DBI Performance
by perrin (Chancellor) on Aug 18, 2001 at 08:48 UTC
Re: Perl DBI Performance
by htoug (Deacon) on Aug 19, 2001 at 18:59 UTC
Re: Perl DBI Performance
by busunsl (Vicar) on Aug 20, 2001 at 10:40 UTC