While trying to tune my DBI-driven database middleware, I tried something that is certainly poor form and got a surprise.
The MySQL table has just two columns, col_1 and col_2, both SMALLINT, but over 12 million rows. Neither column is guaranteed to hold unique values, but the combination of the two is definitely unique, so I created a compound primary key.
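For reference, the table is essentially the following (column names and types as described above; the rest of the DDL is my best guess):

    CREATE TABLE theTable (
        col_1 SMALLINT NOT NULL,
        col_2 SMALLINT NOT NULL,
        PRIMARY KEY (col_1, col_2)  -- compound key: neither column is unique on its own
    );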
I want to count how many col_2 values appear with both x and y in col_1.
To benchmark the database performance independently of the Perl script, I telnetted in and did:
select count(*) from theTable A inner join theTable B where (A.col_1 = x and B.col_1 = y and A.col_2 = B.col_2);
which gave me an average response time of ~0.13 sec. (Many thanks to those in the other thread who encouraged me to try various index configurations)
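(The same join can also be written with an explicit ON clause, which MySQL treats identically; shown here only for readability:)

    select count(*)
    from theTable A
    inner join theTable B on A.col_2 = B.col_2
    where A.col_1 = x and B.col_1 = y;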
Then, I wondered how much time the join was adding, so I wrote a Perl script:
    $sql = "SELECT col_2 FROM theTable WHERE col_1 = $value_1";
    $sth = $dbh->prepare($sql) or die("Could not prepare! " . $dbh->errstr);
    $sth->execute() or die("Could not execute 1! " . $dbh->errstr);
    while ( ($data) = $sth->fetchrow_array() ) { push(@col_1, $data); }

    $sql = "SELECT col_2 FROM theTable WHERE col_1 = $value_2";
    $sth = $dbh->prepare($sql) or die("Could not prepare! " . $dbh->errstr);
    $sth->execute() or die("Could not execute 2! " . $dbh->errstr);
    while ( ($data) = $sth->fetchrow_array() ) { push(@col_2, $data); }

    # intersect the two result lists: keep only the col_2 values seen in both
    foreach $e (@col_2) { $union{$e} = 1 }
    foreach $e (@col_1) { if ( $union{$e} ) { $isect{$e} = 1 } }
    @isect = keys %isect;

Essentially, I am doing the join myself by finding the intersection of the two query result arrays in Perl. This had an average response time of ~0.10 sec! I expect it will be even better if I replace the repetitive prepares with placeholders.
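For what it is worth, here is a rough sketch of the placeholder version I have in mind (untested, and it assumes the same $dbh, $value_1 and $value_2 as above):

    # prepare once, then execute twice with a bound value
    my $sth = $dbh->prepare("SELECT col_2 FROM theTable WHERE col_1 = ?")
        or die("Could not prepare! " . $dbh->errstr);

    my (@col_1, @col_2);
    for my $run ( [ $value_1, \@col_1 ], [ $value_2, \@col_2 ] ) {
        my ($value, $results) = @$run;
        $sth->execute($value) or die("Could not execute! " . $dbh->errstr);
        while ( my ($data) = $sth->fetchrow_array() ) {
            push @$results, $data;
        }
    }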
Now, I expect to be savaged for moving database functionality into Perl code, but if it works better...