in reply to performance with mysql / file-caching / hash reference on demand
BTW, sometimes, the cheapest solution is to upgrade the hardware.
Re^2: performance with mysql / file-caching / hash reference on demand
by derion (Sexton) on May 02, 2021 at 09:17 UTC
Thank you very much for your reply. The webshop is built on a Links management system, which would work with mod_perl. My webserver is a couple of years old but still performs well and runs on SSDs:
At the moment a detail page takes about 1.1 seconds to display, but I want to add significantly more information, and I would like to get faster - at least not slower. One code example I am using to get media (images, datasheets) from the DB:
My thought was that it could make sense to store these results in some static form (database table, file) on a daily basis, to avoid running the same routine over and over again. Cheers, derion
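The daily precompute idea described above could be as simple as a file cache with a freshness check. A minimal sketch using the core Storable module; the function name, cache path, and builder callback are illustrative, not from the original post:

```perl
use strict;
use warnings;
use Storable qw(store retrieve);

# Hypothetical daily file cache: rebuild the data structure at most once a day.
# $cache_file and the $builder callback are assumed names, not the poster's code.
sub daily_cache {
    my ($cache_file, $builder) = @_;
    # -M gives the file age in days; reuse the file while it is under a day old
    if (-e $cache_file && -M $cache_file < 1) {
        return retrieve($cache_file);
    }
    my $data = $builder->();       # e.g. run the expensive DB queries here
    store($data, $cache_file);     # serialize the result for later requests
    return $data;
}
```

A cron job could also rebuild the file once a day, so that no visitor ever pays the rebuild cost during a request.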
by afoken (Chancellor) on May 02, 2021 at 11:05 UTC
No placeholders here. That prevents DBI, the DBD, and the database itself from caching the query, so it has to be parsed from scratch each and every time, even if only the IDs change. Using placeholders and prepare_cached() instead of prepare() could improve speed, at least in a persistent environment (i.e. mod_perl, FastCGI, basically everything but plain old CGI). And, even worse, you might become a victim of SQL injection. See https://bobby-tables.com/.
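As a hedged illustration (the table and column names below are guesses, not the poster's real schema), the fix is to bind the IDs through placeholders instead of interpolating them into the SQL text:

```perl
use strict;
use warnings;

# Illustrative only: "Media" and "ProductID" are assumed names.
my @ids = (101, 102, 103);

# Build one "?" per ID for the IN (...) list
my $placeholders = join ', ', ('?') x @ids;
my $sql = "SELECT ID, Image, Purpose, Priority FROM Media "
        . "WHERE ProductID IN ($placeholders)";

# In a persistent environment you would then run (not executed here):
#   my $sth = $dbh->prepare_cached($sql);   # DBI reuses the parsed handle
#   $sth->execute(@ids);                    # values never touch the SQL text
print "$sql\n";
```

Because the SQL text stays identical across requests, prepare_cached() can hand back the same statement handle every time, and the bound values cannot inject SQL.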
I did not even try to fully understand what happens here. But you are fetching data from the database just to discard it, based on the values of the "Purpose" and "Priority" columns. That won't speed things up. Try not to fetch data that you don't need. Make the database filter out everything you don't need, i.e. put your criteria into the WHERE clause. That way, the database won't have to find data you don't need, fetch it, serialize it, and push it through the connection to your application, where the DBD has to deserialize it, all just to discard that unwanted data. If the remaining code looks similar, that's where you can really improve performance. Use placeholders everywhere to allow at least the database to cache queries, and filter in the database, not in Perl.

Alexander
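For example (again with guessed table and column names), the "highest priority per category" rule can itself live in the WHERE clause, so unwanted rows never leave MySQL. A sketch, not the poster's actual query:

```perl
use strict;
use warnings;

# Sketch only: schema names are assumptions. The correlated subquery keeps,
# per product and per Purpose, only the rows with the highest Priority.
my $sql = <<'SQL';
SELECT Image, Purpose, Priority
FROM   Media
WHERE  ProductID = ?
  AND  Priority = (
         SELECT MAX(m2.Priority)
         FROM   Media m2
         WHERE  m2.ProductID = Media.ProductID
           AND  m2.Purpose   = Media.Purpose
       )
SQL

# Executed as (not run here):
#   my $sth = $dbh->prepare_cached($sql);
#   $sth->execute($product_id);
print $sql;
```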
-- Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
by derion (Sexton) on May 02, 2021 at 12:20 UTC
Thank you very much for your feedback. The effect of placeholders and caching was new to me but makes sense. On the security issue you are right in general, but in this case I think there is no risk, because only INT values from the product data find their way to this subroutine. Nevertheless, I will clean up my code with placeholders, which seem to be better in every way. I will also rethink the logic in this operation. The big "but" is that I have categories of images (single, item in use, primary) and several sources for images, where one source can supply one or more categories. I want to display the image group with the highest priority for every category and discard the rest. If a product is edited by staff, I still want access to all available images, which could result in an editorial pick for every image category. A small part of the idea is the following:
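If part of this selection has to stay in Perl (e.g. the staff-editing case, where all images must remain accessible), a hedged in-memory sketch of "keep the highest-priority group per category" could look like this. The row fields Purpose, Priority, and Image are assumptions based on the discussion, not the real schema:

```perl
use strict;
use warnings;

# Hypothetical sketch: group rows by category (Purpose) and keep only the
# group sharing the highest Priority in each category. Field names are guesses.
sub best_group_per_category {
    my (@rows) = @_;
    my %best;   # Purpose => arrayref of rows sharing the top Priority
    for my $row (@rows) {
        my $cat = $row->{Purpose};
        if (!exists $best{$cat}
            || $row->{Priority} > $best{$cat}[0]{Priority}) {
            $best{$cat} = [$row];            # strictly better: start a new group
        }
        elsif ($row->{Priority} == $best{$cat}[0]{Priority}) {
            push @{ $best{$cat} }, $row;     # tie: same group, keep both images
        }
        # lower priority: discard
    }
    return \%best;
}
```

So two "product in use" images with Priority 5 would both survive as one group, while a Priority 3 image in the same category would be dropped.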
AAB (product in use) could be two images from a source with Priority 5. I start to doubt whether what I am doing is clever, even with prices, where I originally thought the approach was obvious:
I order the prices by amount and price. If an amount equals the previous amount, the price is skipped, and if a price is higher than the previous price, it is skipped as well. The first case can occur due to rounding and calculation; the second should not happen, but gives me some kind of safety net. As said, I am happy for your input, because it makes me question myself and brings up new ideas. Cheers, derion
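The tier cleanup described above (sort by amount, drop repeated amounts, drop prices that go up) can be written as one short pure-Perl pass. A sketch with assumed field names amount and price:

```perl
use strict;
use warnings;

# Sketch of the described rule: order tiers by amount (then price), skip a
# tier whose amount repeats, and skip a tier whose price is higher than the
# previously kept price. Field names are assumptions, not the real schema.
sub clean_price_tiers {
    my (@tiers) = @_;
    my @sorted = sort {
        $a->{amount} <=> $b->{amount}
            || $a->{price} <=> $b->{price}
    } @tiers;
    my @kept;
    for my $t (@sorted) {
        next if @kept && $t->{amount} == $kept[-1]{amount};  # rounding duplicate
        next if @kept && $t->{price}  >  $kept[-1]{price};   # price must not rise
        push @kept, $t;
    }
    return @kept;
}
```

Sorting by price as the tiebreaker means that, of two tiers with the same amount, the cheaper one is kept and the duplicate is the one skipped.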
by hippo (Archbishop) on May 02, 2021 at 13:53 UTC
by derion (Sexton) on May 02, 2021 at 14:56 UTC