My first version was very simple: just run the query each time and return the first row. This has two problems: it's slow, and two requests can get the same row back if they come in simultaneously.
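A minimal sketch of that naive approach. The `codes` table, its columns, and the SQLite database are illustrative assumptions, not from the original post; the point is the gap between the SELECT and the UPDATE, which is exactly the race described above.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical table of pre-generated rows to hand out, one per request.
my $dbh = DBI->connect('dbi:SQLite:dbname=pool.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

sub next_row_naive {
    # Fetch the first unused row ...
    my ($id, $value) = $dbh->selectrow_array(
        'SELECT id, value FROM codes WHERE used = 0 ORDER BY id LIMIT 1');
    return unless defined $id;
    # ... then mark it used. Between the SELECT and the UPDATE another
    # process can grab the same row -- the race the post describes.
    $dbh->do('UPDATE codes SET used = 1 WHERE id = ?', undef, $id);
    return $value;
}
```

Every call is a round trip to the database, and nothing serializes concurrent callers.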
My second version built a cache (an array of records) in shared memory using IPC::Shareable: each time I just shift the array, and query once when the cache is empty. This has been working fine for a while, but I don't like the shared memory solution. One reason is that my test script always runs out of semaphores (that's OK in deployment, since there are only a few Apache processes); the other is that I'd like other apps, possibly on a different machine, to have access to the same data.
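A sketch of that second scheme, assuming a hypothetical `refill_from_db()` helper that runs the query once and returns a batch of rows (not from the post). Each tied array allocates SysV shared memory and semaphores, which is where a test script that keeps creating segments can exhaust the system-wide limit.

```perl
use strict;
use warnings;
use IPC::Shareable;

# Tie a Perl array to a shared-memory segment; 'rowq' is an arbitrary
# four-character glue key, and destroy => 1 cleans up on exit.
tie my @cache, 'IPC::Shareable', 'rowq', { create => 1, destroy => 1 }
    or die "cannot tie shared array: $!";

sub next_row_shared {
    my ($refill) = @_;                 # code ref standing in for the real query
    my $handle = tied @cache;
    $handle->shlock;                   # serialize access across processes
    @cache = $refill->() unless @cache;  # query only when the cache is empty
    my $row = shift @cache;
    $handle->shunlock;
    return $row;
}
```

The `shlock`/`shunlock` pair is what prevents two Apache children from shifting the same element, but the cache is still confined to one machine.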
So I'm working on my third solution: a separate TCP server that acts as a global cache. I'm avoiding a SOAP-like mechanism, since that's not fast enough for me. I'd like a client (say, the web app) to keep the TCP connection open. This seems to be working fine now, based on my limited test scripts.
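A minimal sketch of such a dispensing server, not the poster's actual code: one process owns the cache and hands out one row per `GET` line over a persistent connection. The line-based protocol and the `$refill` code ref (standing in for the real database query) are illustrative assumptions, and a production server would also need to handle multiple clients concurrently (e.g. via `select()` or forking).

```perl
use strict;
use warnings;
use IO::Socket::INET;

my @cache;

# Hand out one cached row, refilling from the DB only when empty.
sub next_row {
    my ($refill) = @_;
    @cache = $refill->() unless @cache;
    my $row = shift @cache;
    return defined $row ? $row : 'EMPTY';
}

# Serve one row per "GET" line; the connection stays open, so a web
# app pays the connect cost once rather than per request.
sub run_server {
    my ($port, $refill) = @_;
    my $server = IO::Socket::INET->new(
        LocalPort => $port,
        Proto     => 'tcp',
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen failed: $!";
    while (my $client = $server->accept) {
        while (my $req = <$client>) {
            last unless $req =~ /^GET/;
            print $client next_row($refill), "\n";
        }
        close $client;
    }
}
```

Because a single process owns `@cache`, requests are serialized naturally and no shared memory or semaphores are needed; any machine that can reach the port can use it.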
I guess my question is to ask for your thoughts/comments on the whole task/solution, and whether such a solution already exists (am I too late for that? I'm willing to throw out my code any time there's a better one!). Thanks.
In reply to A database caching scheme by johnnywang