in reply to read a huge mysql Table into memory

There is indeed some overhead in Perl arrays, so it is correct that the array will take more space in memory than the same data does in the MySQL table.

But 10 queries per second is not much, and MySQL should easily keep up with that number of requests, provided the record ID you search on is indexed.
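Adding that index is a one-liner. A minimal sketch, assuming a hypothetical table pages with a look-up column record_id (neither name is from your post, so substitute your own):

    use strict;
    use warnings;
    use DBI;

    # Hypothetical DSN, credentials, table and column names.
    my $dbh = DBI->connect( 'dbi:mysql:database=mydb', 'user', 'password',
        { RaiseError => 1 } );

    # An index on the look-up column lets MySQL answer each request
    # without scanning the whole table.
    $dbh->do('CREATE INDEX record_id_idx ON pages (record_id)');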

Another thing to consider: since you are speaking of web pages, I assume your script will run on the web server. Is it a CGI script, or does it run under mod_perl? If it is a plain CGI script, you will have to rebuild your array for every request, and that will take ages. Under mod_perl you pay that start-up cost only once, but I still think an indexed SELECT against your table will be the best solution.
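Under mod_perl both the database handle and the prepared statement can be cached across requests, so each page hit costs only the execute and the fetch. A sketch of what I mean, again with the hypothetical pages/record_id names:

    use strict;
    use warnings;
    use DBI;

    sub lookup_record {
        my ($record_id) = @_;

        # connect_cached re-uses an existing handle under mod_perl
        # instead of reconnecting on every request.
        my $dbh = DBI->connect_cached( 'dbi:mysql:database=mydb',
            'user', 'password', { RaiseError => 1 } );

        # prepare_cached keeps the compiled statement around, so
        # repeated page hits only pay for execute and fetch.
        my $sth = $dbh->prepare_cached(
            'SELECT name, value FROM pages WHERE record_id = ?');

        $sth->execute($record_id);
        my $row = $sth->fetchrow_hashref;
        $sth->finish;

        return $row;
    }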

CountZero

"A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little or too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James


Re^2: read a huge mysql Table into memory
by hiX0r (Acolyte) on May 30, 2009 at 18:01 UTC
    Sorry, I was not precise: 10 page hits per second, each containing several MySQL queries (different pages perform different tasks, but a good average would be around 15), so we are talking more in the range of 150 to 200 queries per second... and see my reply above: speed is what counts :)
      That should still be well within MySQL's capabilities.
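      If in doubt, it is easy enough to measure with the Benchmark module. A quick sketch, using the same hypothetical table and column names as above; the negative count makes timethis run the look-up for at least 10 CPU-seconds and report the rate it sustains:

          use strict;
          use warnings;
          use Benchmark qw(timethis);
          use DBI;

          my $dbh = DBI->connect( 'dbi:mysql:database=mydb', 'user', 'password',
              { RaiseError => 1 } );
          my $sth = $dbh->prepare(
              'SELECT name, value FROM pages WHERE record_id = ?');

          # Run the indexed look-up for at least 10 CPU-seconds and
          # report how many iterations per second it achieves.
          timethis( -10, sub {
              $sth->execute( int rand 100_000 );    # hypothetical ID range
              $sth->fetchrow_arrayref;
              $sth->finish;
          } );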

      CountZero

      "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little or too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James