Re: Perl/CGI + MySQL: How many calls is too many?
by JayBonci (Curate) on Feb 27, 2002 at 19:57 UTC
Each site is different. Some selects may be really quick, while others (say, against millions of rows) could take forever. It all depends on the site, your hardware configuration, and especially the data you are trying to get. Have you looked to see where your bottleneck is? Is it in the database or on the webserver? While your question is rather vague, here are a few things you may want to look into:
- Cache your entries in Perl and use them elsewhere. You might be able to shave some cycles off by throwing your results into a hashref and keeping them around a little longer.
- Check the index efficiency on your tables. You say you're using indexes properly, but it may still be worth revisiting the CREATE INDEX documentation (MySQL documentation for CREATE INDEX).
- Change your table type. You might be able to do more with a table type that best fits your SQL usage. There are a few to choose from, each with its own optimizations; see the sketch below.
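Converting a table type is a one-liner per table. A minimal sketch, with invented table names, using the TYPE= spelling current in MySQL 3.23/4.0:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'password',
                           { RaiseError => 1 });

    # MyISAM suits read-mostly tables; InnoDB adds row-level locking
    # for tables that see concurrent updates.
    $dbh->do('ALTER TABLE page_text TYPE=MyISAM');
    $dbh->do('ALTER TABLE sessions  TYPE=InnoDB');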
DBI placeholders are a good start, but you may also want to use Perl in a few places to cache your results. It would also help to know what kinds of things you are trying to do.
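Something along these lines (a minimal sketch; the DSN, table, and column names are invented) combines a prepared statement with a script-lifetime hashref cache:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=mydb;host=localhost',
                           'user', 'password', { RaiseError => 1 });

    # Prepare once with a placeholder, execute many times.
    my $sth = $dbh->prepare('SELECT id, name FROM items WHERE category = ?');

    my %cache;    # lives as long as the script (or mod_perl child) does

    sub items_for {
        my ($category) = @_;
        return $cache{$category} if exists $cache{$category};
        $sth->execute($category);
        # fetchall_arrayref({}) returns an arrayref of hashrefs
        return $cache{$category} = $sth->fetchall_arrayref({});
    }

    # A second call for the same category never touches the database.
    my $rows = items_for('books');
    print "$_->{id}: $_->{name}\n" for @$rows;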
Reply to the post with more details, and we might be able to help you out more, but this is probably largely a MySQL question. Thanks.
Cache your entries in Perl and use them elsewhere. You might be able to shave some cycles off by throwing your results into a hashref and keeping them around a little longer.
Excellent point. You might also want to consider the Cache::* modules for your caching needs. They offer fixed-size caching, which can help keep you from running out of RAM or disk. The file-based caches are very fast and might be ideal for less often needed data.
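A minimal sketch of the idea; the namespace, key, lifetime, and the expensive_query() stub are all invented for illustration:

    use strict;
    use warnings;
    use Cache::FileCache;

    # Stand-in for the real database work.
    sub expensive_query { return [ 'headline one', 'headline two' ] }

    # Cache::SizeAwareFileCache accepts a max_size option if you need a
    # hard ceiling; the plain file-based cache is shown here.
    my $cache = Cache::FileCache->new( {
        namespace          => 'site_data',
        default_expires_in => 600,    # seconds
    } );

    my $key  = 'front_page_headlines';
    my $data = $cache->get($key);
    unless ( defined $data ) {
        $data = expensive_query();
        $cache->set( $key, $data );   # expires per the default above
    }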
Also take a look at Perrin Harkins's excellent eToys article on perl.com for a great story on effective and intelligent caching.
Re: Perl/CGI + MySQL: How many calls is too many?
by tjh (Curate) on Feb 27, 2002 at 20:43 UTC
In broader terms of database performance, eWeek and PCMag just released some benchmarks of Oracle9i, MySQL 4.0.1 Max, MS SQL Server 2000, DB2, and Sybase's ASE in an article here.
Tests included transactions, load testing, simultaneous requests, and more. Interesting article.
Spoiler:
MySQL shone brightly... "Overall, Oracle9i and MySQL had the best performance and scalability, with Oracle9i just very slightly ahead of MySQL for most of the run." Plus, some tidbits and suggestions on tuning the brands they tested. HTH
Those benchmarks look pretty dodgy to me. Whenever I see graphs where two products follow the EXACT same line (in this case MySQL and Oracle), I get suspicious - it basically means that the database is running at the fundamental limits of "something", be it the OS, the driver, or the technology being used. That's not necessarily a bad thing, but it tells you nothing about which is faster - MySQL or Oracle. For that you need the database itself to be the limiting factor, and it obviously isn't in this test.
Another issue with that particular test is that the JDBC drivers used seem to have played a big role. So while the app worked well for Oracle and MySQL, it wasn't well suited to Sybase (Sybase isn't very good at handling cursors, because in general you don't need them...).
Michael
Re: (Alternative viewpoint) Perl/CGI + MySQL: How many calls is too many?
by Ryszard (Priest) on Feb 28, 2002 at 04:12 UTC
How about "things start to become messy when it becomes difficult to maintain and document your code"?
IMO, even if you're not facing performance issues (although I think you imply some with the comment about placeholders), things tend to dive into "chaos" faster the more complex they become.
For example, if you're making heaps of calls because the information in your db is poorly normalised, that would be messy in my book.
Another example: if you're making heaps of calls because you're trying to fit too much information on one page, that is messy application design. In this case, I would tend to break the site into categories and put each category in a separate section to dilute the content a touch.
If your application is getting too big for one developer, and you only have budget for one, you could be heading into "messy" land. I say this because once a system is that big, bits of it will be forgotten; when something changes, something breaks, and it takes time (and cashola) to track down the problems, which can compound other issues you may be facing, such as time to market, and lead to resource contention.
If you're redesigning your site, don't forget most users don't like things more than about three clicks away.
Cut-off? It's purely arbitrary, depending on your application design, information-system back end, hardware, performance, et al. Take a look at sites that have lots of aggregated content (such as Yahoo) and copy the design if you need to.
Re: Perl/CGI + MySQL: How many calls is too many?
by mpeppler (Vicar) on Feb 27, 2002 at 22:22 UTC
The answer for this sort of question is usually "it depends".
In general the fewer database requests the better, but you should not necessarily focus on the number of requests but rather on the time it takes for each of those requests to complete.
Michael
Re: Perl/CGI + MySQL: How many calls is too many?
by Anonymous Monk on Feb 28, 2002 at 02:43 UTC
First, a few questions:
a) How long can your users wait for data before getting fed up?
b) How many concurrent users do you have?
c) How many transactions can your database support per second?
The answer then becomes clear: the number of transactions per page, d, is too many when b*d > c*a - that is, when the required throughput b*d/a exceeds the c transactions per second the database can deliver.
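To make that concrete with invented numbers: if users will wait a = 2 seconds, you expect b = 50 concurrent users, and the database sustains c = 200 transactions per second, the database can deliver c*a = 400 transactions inside the window while the pages demand b*d. Each page can therefore afford at most d = c*a/b = 400/50 = 8 database calls.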
Re: Perl/CGI + MySQL: How many calls is too many?
by pizza_milkshake (Monk) on Mar 01, 2002 at 00:51 UTC
I find it hard to imagine any scenario where you'd need to run 15 selects in one page...
It's not 15, but PerlMonks runs between 4 and 10 selects per page (depending on the number of nodelets you have). Huh ...
Re: Perl/CGI + MySQL: How many calls is too many?
by mattr (Curate) on Mar 01, 2002 at 04:48 UTC
I can give you my own experience: for a community site I built, that would have been too much. What are all those selects and updates for?
Some table types penalize you less if you are doing selects and inserts at the same time.
You might want to see if you can apply GROUP BY to reduce the number of selects you are doing. The presorted output is much faster to access within Perl; I got a 20x-30x efficiency improvement for a spreadsheet report generator.
If you can build more complex data objects, and maybe serialize them, you might be able to get away with just a couple of statements. Perhaps if you post your data structure and what you need to do, you could get more help.
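As an illustration (table and column names invented), one grouped query can replace a SELECT per category, and the presorted rows collapse into a report structure in a single pass:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'password',
                           { RaiseError => 1 });

    my $sth = $dbh->prepare(q{
        SELECT   category, COUNT(*) AS n, SUM(amount) AS total
        FROM     orders
        GROUP BY category
        ORDER BY category
    });
    $sth->execute;

    # One pass over the presorted rows builds the whole report.
    my %report;
    while ( my $row = $sth->fetchrow_hashref ) {
        $report{ $row->{category} } = { count => $row->{n},
                                        total => $row->{total} };
    }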
In MySQL, in my current project, it was the GROUP BYs that were killing me. It's much faster to just slurp all of that information into a hashref and do the grouping in Perl. The same goes for SUM and COUNT calls: if you are doing more than one or two of these, you pretty much NEED to pull the data into Perl and do the additions there.
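A sketch of that approach (table and column names invented): one plain SELECT, with the grouping, counting, and summing done in Perl:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'password',
                           { RaiseError => 1 });

    # Slurp everything once; { Slice => {} } returns hashrefs.
    my $rows = $dbh->selectall_arrayref(
        'SELECT category, amount FROM orders',
        { Slice => {} },
    );

    my %totals;
    for my $row (@$rows) {
        $totals{ $row->{category} }{count}++;
        $totals{ $row->{category} }{sum} += $row->{amount};
    }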
oakbox