I would be interested to see if this makes any net gain on the PM site. Many essentially randomized hits against his indexed table should already be performing at very close to minimal cost (O(log n) -- it is indexed). So it would seem to only add extra housekeeping steps to the database (creating the cache, hooking updates/inserts to invalidate it, expiring entries, memory allocation, losing actual memory to the cache's own indexes) in order to populate a cache that by all common sense would have a low hit rate anyway: the items are atom-like in nature, too many and too random to cache, or at least to cache any better than the initial O(log n) lookup.

I agree that it is an easy test to do, since no actual data has to be changed. It would just be very counterintuitive to me if it did enhance performance in this case. I would expect a redesign that treats the cached items at a different scope to have a more profound impact on performance: treat nodes of different types in different specialized ways, with a better data structure for caching each.
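To make the "easy test" concrete, here is a minimal sketch of how I would measure the hit rate before committing to any invalidation hooks. Everything here is hypothetical -- MeasuredCache, fetch_node, and the key stream are placeholders, not the PM site's actual code; the point is just to replay a realistic sequence of lookups through an LRU wrapper and read off the hit rate:

    # Hypothetical sketch: wrap the existing O(log n) indexed lookup in an
    # LRU cache that counts hits and misses. No data is changed; we only
    # observe whether the access pattern would reward a cache at all.
    from collections import OrderedDict

    class MeasuredCache:
        """LRU cache wrapper that tracks hits/misses for the experiment."""
        def __init__(self, fetch, capacity=1024):
            self.fetch = fetch            # the existing indexed lookup
            self.capacity = capacity
            self.store = OrderedDict()
            self.hits = 0
            self.misses = 0

        def get(self, key):
            if key in self.store:
                self.hits += 1
                self.store.move_to_end(key)      # LRU bookkeeping
                return self.store[key]
            self.misses += 1
            value = self.fetch(key)              # fall through to the index
            self.store[key] = value
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least recently used
            return value

        def hit_rate(self):
            total = self.hits + self.misses
            return self.hits / total if total else 0.0

Replay a day's worth of node IDs through get() and look at hit_rate(). If the items really are as atom-like and randomly accessed as I suspect, the rate will stay low and the cache buys nothing but the housekeeping overhead described above.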