    sub get_udx_media_relations {
        my $SupplierID  = shift;
        my $SupplierPID = shift;
        my $return;
        my $query = qq|SELECT ProviderID, SourceID, Folder, Filename, Extension,
                              Purpose, Rank, Priority, Width, Height
                         FROM CT_Media_Relations
                        WHERE SupplierID = '$SupplierID'
                          AND SupplierPID = '$SupplierPID'
                          AND isDeleted != 'Yes'
                        GROUP BY Filename, Extension
                        ORDER BY Priority DESC, Rank ASC|;
No placeholders here. That prevents DBI, the DBD, and the database itself from caching the query, so it has to be parsed from scratch every single time, even if only the IDs change. Using placeholders and prepare_cached() instead of prepare() could improve speed, at least in a persistent environment (mod_perl, FastCGI, basically everything but plain old CGI). And, even worse, interpolating values directly into the SQL makes you a potential victim of SQL injection. See https://bobby-tables.com/.
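A minimal sketch of the placeholder version (assuming $dbh is an already-connected DBI handle; the query text is taken from your code above):

```perl
# With placeholders the statement text is constant, so DBI/DBD and the
# database can cache the prepared statement across calls, and the values
# are sent separately -- no injection through $SupplierID/$SupplierPID.
my $sth = $dbh->prepare_cached(q|
    SELECT ProviderID, SourceID, Folder, Filename, Extension,
           Purpose, Rank, Priority, Width, Height
      FROM CT_Media_Relations
     WHERE SupplierID = ? AND SupplierPID = ? AND isDeleted != 'Yes'
     GROUP BY Filename, Extension
     ORDER BY Priority DESC, Rank ASC
|);
$sth->execute($SupplierID, $SupplierPID);
```

Note the single-quoted q|...| instead of qq|...|: nothing gets interpolated, which also makes it obvious at a glance that no variables leak into the SQL.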
        while ($_ = $sth->fetchrow_hashref) {
            if ($prev_priority->{$_->{Purpose}} > $_->{Priority}) {
                next unless $_->{Purpose} eq 'PDB';
            }
            # get rid of mds when manufacturer is present (4 vs 5)
            if (   $_->{Purpose} eq 'AAB'
                or $_->{Purpose} eq 'EBD'
                or $_->{Purpose} eq 'PBS'
                or $_->{Purpose} eq 'PRI') {
                if ($prev_priority->{IMG} > $_->{Priority}) {
                    next;
                }
                else {
                    $prev_priority->{IMG} = $_->{Priority};
                }
            }
            $prev_priority->{$_->{Purpose}} = $_->{Priority};
            if ($_->{Purpose} eq 'PRI' && !$return->{Primary}->{path}) {
                $return->{Primary} = $_;
                $return->{Primary}->{path} = '/dx/' . $_->{ProviderID} . "/" . $_->{SourceID};
                $return->{Primary}->{file} = $_->{Filename} . '.' . $_->{Extension};
                $return->{Primary}->{file} = $_->{Folder} . '/' . $return->{Primary}->{file} if $_->{Folder};
            }
            my $media = $_;
            $media->{path} = '/dx/' . $_->{ProviderID} . "/" . $_->{SourceID};
            $media->{file} = $_->{Filename} . '.' . $_->{Extension};
            $media->{file} = $_->{Folder} . '/' . $media->{file} if $_->{Folder};
            push(@{$return->{$_->{Purpose}}}, $media);
        }
I did not even try to fully understand what happens here. But you are fetching data from the database just to discard it, based on the values of the "Purpose" and "Priority" columns. That won't speed things up. Try not to fetch data that you don't need: make the database filter out everything you don't need, i.e. put your criteria into the WHERE condition. That way, the database won't have to find data you don't need, fetch it, serialize it, and push it through the connection to your application, where the DBD has to deserialize it, all of that just to discard that unwanted data.
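As a rough sketch only (the exact keep/discard rules are yours to translate, and I may have misread them): even just restricting Purpose in SQL moves work into the database, and if you are on MySQL 8 or later, the per-Purpose priority ranking that your $prev_priority hash tracks could be done with a window function:

```sql
-- Sketch: restrict to the Purpose values the Perl loop actually keeps,
-- and let the database rank rows per Purpose (MySQL 8+ window functions)
-- instead of comparing priorities in Perl after fetching everything.
SELECT * FROM (
    SELECT ProviderID, SourceID, Folder, Filename, Extension,
           Purpose, Rank, Priority, Width, Height,
           ROW_NUMBER() OVER (PARTITION BY Purpose
                              ORDER BY Priority DESC, Rank ASC) AS rn
      FROM CT_Media_Relations
     WHERE SupplierID = ? AND SupplierPID = ?
       AND isDeleted != 'Yes'
       AND Purpose IN ('PRI', 'AAB', 'EBD', 'PBS', 'PDB')
) AS ranked
```

The outer SELECT can then filter on rn (e.g. rn = 1 for only the best row per Purpose), so only the rows you actually want cross the wire.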
If the remaining code looks similar, that's where you can really improve performance: use placeholders everywhere so that at least the database can cache queries, and filter in the database, not in Perl.
Alexander
In reply to Re^3: performance with mysql / file-caching / hash reference on demand
by afoken
in thread performance with mysql / file-caching / hash reference on demand
by derion