Hi all,
I have a quick question about the best way to cache a bunch of data for a program I'm making. The data to be cached is roughly 15 KB per entry (a web page). I'm not sure yet how long each entry will need to be cached; maybe one day, maybe one month. There will be at least 1000 entries per day.
What I'm wondering is whether it will be faster in the long run to store the data in a MySQL table and query it with DBI each time I need it, or to keep it in regular files on the server and slurp them in each time. Which takes longer for Perl? And what about checking for the existence of a cached entry? Is it quicker to test for a file with -e, or to query the database only to find nothing there? And what happens when x users are searching at once?
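To make the comparison concrete, here is a minimal sketch of the two lookup paths I have in mind. The cache directory, table name, column names, and connection credentials are all placeholders, not anything I've settled on:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # --- Approach 1: flat files, one file per entry ---
    my $cache_dir = '/var/cache/myapp';    # hypothetical path

    sub file_cache_get {
        my ($key) = @_;
        my $path = "$cache_dir/$key";
        return undef unless -e $path;      # existence check with -e
        open my $fh, '<', $path or return undef;
        local $/;                          # slurp mode: read whole file
        my $data = <$fh>;
        close $fh;
        return $data;
    }

    # --- Approach 2: MySQL via DBI ---
    # Assumes a table like: cache(cache_key VARCHAR(64) PRIMARY KEY,
    #                             data MEDIUMBLOB)
    my $dbh = DBI->connect('DBI:mysql:database=myapp', 'user', 'pass',
                           { RaiseError => 1 });

    sub db_cache_get {
        my ($key) = @_;
        my ($data) = $dbh->selectrow_array(
            'SELECT data FROM cache WHERE cache_key = ?', undef, $key);
        return $data;    # undef means a cache miss
    }

In both cases a miss and a hit go through the same call, so the existence check is folded into the fetch.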
I have no idea why I think this, but for some reason it seems that MySQL would be faster, unless I find I have to cache the data longer than expected and the table just gets huge. In that case, would it be faster to store everything in files, or would proper indexing on the table be enough? A sketch of the kind of schema I mean follows.
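For reference, this is roughly the indexed table I'm picturing (again, names and credentials are placeholders; the expiry interval is just an example):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=myapp', 'user', 'pass',
                           { RaiseError => 1 });

    # With cache_key as the primary key, a single-row lookup should stay
    # fast even as the table grows; the created_at index keeps expiry cheap.
    $dbh->do(<<'SQL');
    CREATE TABLE IF NOT EXISTS cache (
        cache_key  VARCHAR(64) NOT NULL PRIMARY KEY,
        created_at DATETIME    NOT NULL,
        data       MEDIUMBLOB  NOT NULL,
        KEY idx_created_at (created_at)
    )
    SQL

    # Purging stale entries is then one indexed delete:
    $dbh->do('DELETE FROM cache WHERE created_at < NOW() - INTERVAL 30 DAY');

Is an index like that enough to keep lookups fast at, say, 30,000+ rows a month, or does the file approach win at that point?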