kiat has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks,

This is off topic, so please bear with me.

I have some questions regarding the efficiency of MySQL.

1) All other things being constant, does having lots of tables in a single database slow down MySQL's performance?

2) All other things being constant, does efficiency or speed suffer when you have millions of rows of data in a table compared to only a couple of hundred?

3) All other things being constant, does efficiency or speed suffer when you have lots of columns in one table rather than spreading the data across several smaller tables?

Any advice or feedback will be very much appreciated.

Thanks in anticipation :)

Replies are listed 'Best First'.
Re: [OT]Efficiency questions on MySQL
by chromatic (Archbishop) on Aug 18, 2004 at 04:03 UTC

    I'll try to answer these in order.

    • It depends. How many is "lots"? How slow is "slow"?
    • It depends. What do you mean by "suffer"? Do you need those millions of rows? How are your indexes? What does the query plan look like?
    • It depends. How does the query plan look? Are you joining the tables on primary keys, or at least indexed columns? (There's a small sketch of checking a plan with EXPLAIN after this list.)
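
    A minimal sketch of checking a query plan from Perl with DBI. The DSN,
    credentials, and the orders/customers tables are made up for illustration;
    the point is only that EXPLAIN tells you whether a join is using an index:

        use strict;
        use warnings;
        use DBI;

        # Connect to a hypothetical database; adjust DSN, user, and password.
        my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                               { RaiseError => 1 });

        # Ask MySQL how it plans to execute the query. A join on an unindexed
        # column usually shows up as type "ALL" (a full table scan).
        my $plan = $dbh->selectall_arrayref(
            'EXPLAIN SELECT * FROM orders o
               LEFT JOIN customers c ON c.id = o.customer_id',
            { Slice => {} },
        );

        for my $row (@$plan) {
            printf "table=%s  type=%s  key=%s  rows=%s\n",
                map { defined $_ ? $_ : 'NULL' }
                    @{$row}{qw(table type key rows)};
        }

        $dbh->disconnect;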

    Now I have one question for you. Have you found your application to be too slow, profiled it, and noticed that after tuning MySQL appropriately and normalizing your tables, it's still too slow?
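
    If it turns out the database is the slow part, reasonably recent versions of
    DBI ship with DBI::Profile, which times your statements for you. A minimal
    sketch, with placeholder connection details and an invented table name:

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                               { RaiseError => 1 });

        # Profile level 2 aggregates wall-clock time per statement text;
        # the report is printed to STDERR when the handle goes away.
        $dbh->{Profile} = 2;

        my $sth = $dbh->prepare('SELECT COUNT(*) FROM some_table');
        $sth->execute;
        my ($count) = $sth->fetchrow_array;
        print "rows: $count\n";

        $dbh->disconnect;

    You can get the same effect without touching the code by setting the
    DBI_PROFILE environment variable before running the script.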

      Thanks, chromatic!

      I haven't done any of those things, such as profiling and tuning.

      I have been adding more and more tables to the same database and those questions came to mind.

        The MySQL developers run a couple of large databases (millions of rows). At work, we run a couple of complex databases (several dozen tables). I expect that with tuning and decent hardware, the upper limits are in the billions of rows, hundreds of tables, and multi-gigabytes of data.

        If this is all one system and if the data belongs in the database, put it there. Keep an eye on performance and profile when you must, but otherwise, you should be okay.

Re: [OT]Efficiency questions on MySQL
by jaco (Pilgrim) on Aug 18, 2004 at 14:24 UTC
    I have to say that, for the most part, it's all relative. It truly depends on
    the operations you're doing on the table.

    If you have a table with 1.6 million rows and you insert into it, you should
    see no hit in speed. I've seen my server hit upwards of 200 inserts per second
    with no negative impact.
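
    If you want to measure that on your own hardware, something along these lines
    will do; the test_results table and its columns are invented here, and the
    numbers you get will obviously depend on your setup:

        use strict;
        use warnings;
        use DBI;
        use Time::HiRes qw(time);

        my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                               { RaiseError => 1 });

        # Prepare once, execute many times, so the SQL is parsed only once.
        my $sth = $dbh->prepare(
            'INSERT INTO test_results (test_name, score) VALUES (?, ?)'
        );

        my $count = 10_000;
        my $start = time;
        for my $i (1 .. $count) {
            $sth->execute("test_$i", $i % 100);
        }
        my $elapsed = time - $start;

        printf "%d inserts in %.2f seconds (%.0f/sec)\n",
            $count, $elapsed, $count / $elapsed;

        $dbh->disconnect;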

    On the other hand, if you're doing massive SELECTs with LEFT JOINs on non-indexed
    tables, you will see them slow down as the size grows. Again, though, it's relative:
    it's still going to be faster than anything DBD::CSV could do, and normal basic
    SELECTs should be fine.

    Just as an example:

    I have one table called "raw_test_results".

    It's currently 5.8 GB of data and 1.8 GB of indexes (7.6 GB total size).
    That's 43,973,656 rows x 36 columns.

    I currently insert into it at 64 inserts per second. However, crunching the numbers
    down with a LEFT JOIN takes well over a minute; when the project started, it took
    well under a second.
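
    If the slow part is a LEFT JOIN on an unindexed column, indexing the join
    column is usually the first thing to try. A sketch with invented table and
    column names (big_table, small_table, small_id):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                               { RaiseError => 1 });

        # Without an index on big_table.small_id, MySQL scans big_table for
        # every row of small_table; with one, the join becomes an index lookup.
        $dbh->do('CREATE INDEX idx_small_id ON big_table (small_id)');

        # EXPLAIN should now show the index under "key" instead of a full scan.
        my $plan = $dbh->selectall_arrayref(
            'EXPLAIN SELECT s.name, COUNT(*)
                FROM small_table s
                LEFT JOIN big_table b ON b.small_id = s.id
               GROUP BY s.name',
            { Slice => {} },
        );

        for my $row (@$plan) {
            my $key = defined $row->{key} ? $row->{key} : 'NONE';
            print "$row->{table}: type=$row->{type}, key=$key\n";
        }

        $dbh->disconnect;
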
      Thanks, jaco!

      That was enlightening :)

        Just to throw in my own two cents (and start repaying the debt of gratitude I owe
        for the help I've just received :)), also be aware that the number of tables you
        can have in a database is limited only by your OS and file system. Apparently it
        gets a little slower when you approach 10,000 tables (because of the way they are
        stored as files on the disk). Also, on a FAT32 partition the size is limited to 2 GB.