In my ADVInstaller project, I have a similar problem. I've got several slow-moving database tables that (without optimization) would need to be queried on each call in each child. That's throwing CPU time out the window, of course.
Now this is work in progress, but here's my approach (which might not be applicable to your project):
First, I have the database team apply a simple trigger procedure to each of those seldom-changing tables that sets a timestamp in a 'last_modification' table.
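Just to make the idea concrete, here's a minimal sketch of installing such a trigger through DBI. This assumes PostgreSQL (ADVInstaller's actual database isn't stated); the DSN, credentials, table and function names are all placeholders:

    #!/usr/bin/perl
    # Sketch: install a "touch the timestamp" trigger via DBI.
    # PostgreSQL/PL-pgSQL syntax assumed; all names are made up.
    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=advinst', 'user', 'pass',
                           { RaiseError => 1 });

    # One row per watched table; the trigger bumps mod_tst on any change.
    $dbh->do(q{
        CREATE OR REPLACE FUNCTION touch_last_modification() RETURNS trigger AS $$
        BEGIN
            UPDATE last_modification
               SET mod_tst = now()
             WHERE table_name = TG_TABLE_NAME;
            RETURN NULL;  -- return value is ignored for AFTER statement triggers
        END;
        $$ LANGUAGE plpgsql;
    });

    $dbh->do(q{
        CREATE TRIGGER track_changes
        AFTER INSERT OR UPDATE OR DELETE ON some_seldom_changing_table
        FOR EACH STATEMENT EXECUTE PROCEDURE touch_last_modification();
    });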
Now on each call I query this table (uncached, of course ;) and fetch the mod_tst's for each table. Note that this query is peanuts (20 rows) compared to the roughly 20 fat queries that follow it (or don't, when our cache is up to date). This cache is stored in IPC shared memory, and thus available to all server children.
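Here's roughly what that cheap freshness check looks like, as a sketch using IPC::Shareable for the shared hash (nested structures work, though each ref costs an extra shared segment) and the last_modification layout from above -- all names hypothetical:

    use strict;
    use DBI;
    use IPC::Shareable;

    # One shared hash for all server children; 'ADVC' is an arbitrary
    # 4-character glue key identifying the segment.
    tie my %cache, 'IPC::Shareable', 'ADVC', { create => 1, mode => 0600 };

    sub cached_rows {
        my ($dbh, $table, $fat_sql) = @_;

        # The peanuts query: one timestamp row per watched table.
        my ($mod_tst) = $dbh->selectrow_array(
            'SELECT mod_tst FROM last_modification WHERE table_name = ?',
            undef, $table,
        );

        my $entry = $cache{$table};
        unless ($entry and $entry->{mod_tst} eq $mod_tst) {
            # Stale or never fetched: run the fat query and store the result.
            $cache{$table} = {
                mod_tst => $mod_tst,
                rows    => $dbh->selectall_arrayref($fat_sql),
            };
        }
        return $cache{$table}{rows};
    }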
This is perfectly transparent to my "root code". I create a cache object (but you could also use a functional approach) which decides whether a given query needs to be refetched or can be served from the cache. The nice thing is that the cache can be as abstract as I like. One cache update can trigger another update for a different query in case they're related; I can even provide "cached" views where one query is, say, actually a subset of some other query.
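One way such a cache object might look -- with a dependency map, so that queries touching the same tables share their invalidation. Class and field names are mine for illustration, not ADVInstaller's:

    package Query::Cache;  # hypothetical name
    use strict;

    sub new {
        my ($class, %args) = @_;
        return bless {
            dbh     => $args{dbh},
            store   => $args{store},    # e.g. the IPC::Shareable hash above
            sql     => $args{sql},      # query key => SQL text
            depends => $args{depends},  # query key => [tables it depends on]
        }, $class;
    }

    sub fetch {
        my ($self, $key) = @_;

        # Combined stamp over all tables this query depends on; if any
        # of them changed, every related query refetches.
        my $stamp = join '|', map { $self->_mod_tst($_) }
                              @{ $self->{depends}{$key} };

        my $entry = $self->{store}{$key};
        unless ($entry and $entry->{stamp} eq $stamp) {
            $self->{store}{$key} = {
                stamp => $stamp,
                rows  => $self->{dbh}->selectall_arrayref($self->{sql}{$key}),
            };
        }
        return $self->{store}{$key}{rows};
    }

    sub _mod_tst {
        my ($self, $table) = @_;
        my ($tst) = $self->{dbh}->selectrow_array(
            'SELECT mod_tst FROM last_modification WHERE table_name = ?',
            undef, $table);
        return $tst;
    }

    1;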
Note that there's also no "master" process or anything of the kind that does the cache management. Any child process updates the cache for all the other children (think musketeers ;). You might argue now that this means any child could (b)lock the other children -- that's true. Yet, in my current implementation all those queries would otherwise go to the database on each call in each child. That's worse, by far... ;)
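If the (b)locking worries you, IPC::Shareable's shlock/shunlock can at least bound it: only the child that wins the lock runs the fat query, and the others re-check the cache once they get in. A sketch, reusing %cache from above:

    sub refresh_if_stale {
        my ($dbh, $table, $fat_sql, $mod_tst) = @_;

        my $knot = tied %cache;
        $knot->shlock;                  # only one child past this point

        # Re-check under the lock: another child may have refreshed already.
        my $entry = $cache{$table};
        unless ($entry and $entry->{mod_tst} eq $mod_tst) {
            $cache{$table} = {
                mod_tst => $mod_tst,
                rows    => $dbh->selectall_arrayref($fat_sql),
            };
        }

        $knot->shunlock;
    }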
I wonder (and have thought about doing it myself) whether it's possible to have some kind of DBI::Caching module which looks like plain DBI, but keeps result sets in IPC shared memory (or even in "normal" memory for single-process clients).
The freshness determination would be based on callback subroutines (like querying a timestamp table, in my case).
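Something along these lines, perhaps -- a purely hypothetical DBI::Caching sketch (no such module is implied to exist), wrapping a real $dbh and taking the freshness test as a callback:

    package DBI::Caching;  # hypothetical module name
    use strict;

    sub new {
        my ($class, %args) = @_;
        return bless {
            dbh      => $args{dbh},
            is_fresh => $args{is_fresh},     # callback: ($key, $meta) -> bool
            cache    => $args{cache} || {},  # plain hash, or an IPC::Shareable one
        }, $class;
    }

    # Looks like plain DBI's selectall_arrayref, but serves from the cache
    # whenever the freshness callback says the cached copy is still good.
    sub selectall_arrayref {
        my ($self, $sql, $attr, @bind) = @_;
        my $key = join "\0", $sql, @bind;

        my $entry = $self->{cache}{$key};
        unless ($entry and $self->{is_fresh}->($key, $entry->{meta})) {
            $entry = {
                meta => { fetched_at => time },
                rows => $self->{dbh}->selectall_arrayref($sql, $attr, @bind),
            };
            $self->{cache}{$key} = $entry;
        }
        return $entry->{rows};
    }

    1;

A timestamp-table callback like the one above would then plug straight into is_fresh, and the root code wouldn't know it isn't talking to plain DBI.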
So long,
Flexx