Your first stop should be your DBA or a schema dump. If the product and user-name columns are properly indexed, the query shouldn't be taking a huge toll on the server. If they are indexed, run the query through EXPLAIN (or the equivalent command if you are using a DBMS other than MySQL) to see whether it is actually using the index. Some DBMSs don't do a good job of optimizing queries run against views, especially if temporary tables are involved (a GROUP BY clause often creates one). In that case, run the query against the original tables rather than the views. Check with your DBA if you need special permissions set up for this.
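For example, assuming the query runs against a view named v_user_products with user_name and product columns (names invented here for illustration), the check is just a matter of prefixing the problem query:

```sql
-- MySQL: prepend EXPLAIN to the query and look at the "key" column
-- of the output; NULL there means no index is being used.
EXPLAIN
SELECT user_name, COUNT(DISTINCT product)
FROM v_user_products
GROUP BY user_name;
```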
Using Perl to optimize a database query should usually be a last resort.
If you really do insist on using Perl, building two separate hashes while reading in rows is not the way to do it. The net effect of the original query is to select all unique user_name/product pairings and print the count of products per user. If you are intent on emulating the effect of GROUP BY and count(distinct), you really only need one hash:
my %hUsers;    # user_name => { product => 1 }

# find all unique combinations of user and product
while ( my ($user_name, $product) = $res->fetchrow_array() ) {
    $hUsers{$user_name}{$product} = 1;
}

# print the product count per user
foreach my $user_name ( sort keys %hUsers ) {
    print $user_name, ' ', scalar keys %{ $hUsers{$user_name} }, "\n";
}
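As a quick sanity check, here is a self-contained sketch of the same one-hash technique run against a few hard-coded rows (the sample user/product pairs are made up for illustration, standing in for what $res->fetchrow_array() would return):

```perl
use strict;
use warnings;

# sample rows standing in for fetched user_name/product pairs (made up)
my @rows = (
    [ 'alice', 'widget' ],
    [ 'alice', 'gadget' ],
    [ 'alice', 'widget' ],    # duplicate pairing, counted only once
    [ 'bob',   'widget' ],
);

my %hUsers;
for my $row (@rows) {
    my ($user_name, $product) = @$row;
    $hUsers{$user_name}{$product} = 1;    # hash keys dedupe for free
}

for my $user_name ( sort keys %hUsers ) {
    print $user_name, ' ', scalar keys %{ $hUsers{$user_name} }, "\n";
}
# prints:
# alice 2
# bob 1
```

Because hash keys are unique, assigning 1 to the same user/product slot twice has no effect, which is exactly what count(distinct) gives you on the server side.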
A final note: if you decide to handle the product count per user in Perl, you might want to use DISTINCT in your query. SELECT DISTINCT user_name, product ... will reduce the number of rows returned by the server without the optimization problems sometimes caused by GROUP BY clauses.
Best, beth
Update: added code to emulate query.
Update: suggested use of DISTINCT.
In reply to Re: Perl/SQL Server Count Discrepancy
by ELISHEVA
in thread Perl/SQL Server Count Discrepancy
by muertetg