Ah, thanks. I see now how I misparsed your description.
That is a better approach than the one I incorrectly thought you had described. To be clear, though, even this (better) method doesn't remove the race condition.
And I still would rather not have the rush of multiple readers all trying to repopulate the cache in the short period after each update (the classic cache-stampede problem). But then, I also don't have a separate data structure for the "display thread" versus the "update node" like you do.
In your situation, I would prefer to have the redirect after the update flagged as "please refresh the cache," so other readers aren't forced to hit the DB. But that presents two problems, since the redirect is surely external. So, in the end, the simplest approach in your situation is yours, and I would end up using it or something very close to it.
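To illustrate what I mean by avoiding that rush of readers: here is a toy sketch (not what either of us actually runs, and all the names are invented) where, after an invalidation, the first reader claims the refresh with an atomic flag while the others keep serving the stale copy instead of piling onto the DB.

```python
import threading

class StampedeSafeCache:
    """Toy cache: after an update marks an entry stale, only one
    reader refreshes it from the DB; the rest keep serving the
    stale value until the refresh lands. Hypothetical names."""

    def __init__(self, fetch_from_db):
        self._fetch = fetch_from_db   # the expensive DB read
        self._data = {}               # key -> (value, stale_flag)
        self._lock = threading.Lock()
        self._refreshing = set()      # keys being refreshed right now

    def invalidate(self, key):
        # Called after an update: keep the old value, but mark it stale.
        with self._lock:
            if key in self._data:
                value, _ = self._data[key]
                self._data[key] = (value, True)

    def get(self, key):
        with self._lock:
            entry = self._data.get(key)
            if entry is not None:
                value, stale = entry
                if not stale or key in self._refreshing:
                    # Fresh value, or someone else already won the
                    # right to refresh: serve what we have, no DB hit.
                    return value
                self._refreshing.add(key)  # we won the refresh
        # Only one thread per stale key reaches the DB. (A cold miss,
        # where the key isn't cached at all, still lets everyone
        # through -- this toy only guards the post-update rush.)
        fresh = self._fetch(key)
        with self._lock:
            self._data[key] = (fresh, False)
            self._refreshing.discard(key)
        return fresh
```

The point of the sketch is just the shape of the trade-off: readers that lose the race serve slightly stale data for a moment, and in exchange each update costs only one DB read instead of one per waiting reader.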
I personally think caching will be a great help. Precisely how it is done is probably not that important.
We already cache and in more than one way. Just not in the way you propose.
It seems like you might think that memcached will be a big performance win because it removes the need for the versions table. Well, I don't have to guess wildly, since I've looked into how resources are actually being used. I wouldn't remove the versions table: if memcached failed, that would leave the site with no node cache at all, and the site comes to a crawl in that configuration (there's little point in the site being up like that). Besides, I haven't observed the versions table to be much of a bottleneck.
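For anyone following along, the versions-table scheme as I understand it looks roughly like this (a sketch with invented names, not our actual code): each process keeps a node cache and validates entries with a cheap version read, so a full node fetch only happens when a node has actually changed.

```python
class VersionValidatedCache:
    """Toy per-process node cache validated against a tiny versions
    table: a cheap version lookup replaces a full node fetch when
    nothing has changed. All names hypothetical."""

    def __init__(self, read_version, read_node):
        self._read_version = read_version  # cheap, e.g. one small-table SELECT
        self._read_node = read_node        # expensive full node fetch
        self._cache = {}                   # node_id -> (version, node)

    def get_node(self, node_id):
        current = self._read_version(node_id)
        cached = self._cache.get(node_id)
        if cached is not None and cached[0] == current:
            return cached[1]               # validated cheaply; cache hit
        node = self._read_node(node_id)    # stale or missing: full fetch
        self._cache[node_id] = (current, node)
        return node
```

Which is why the versions table isn't the cost you might expect: the per-request overhead is the cheap version read, not repeated full node fetches.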
The "win" I see from memcached is, firstly, reducing the memory consumption of Apache children, because they can more freely discard nodes from their per-process cache. (We still need a per-process node cache: the site fundamentally works via nodes, and one should try not to fetch the same node twice within a single page rendering.) And I know that one of the biggest sources of "slow periods" is a web server running too low on available memory.
Secondly, memcached provides a much more efficient mechanism for each Apache process to get an updated node after an update is done. In (unrealistic) theory, a node need not be read from the DB more than once after each update.
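The two points above amount to a two-tier lookup. Here is a toy sketch of the shape of it (a plain dict stands in for memcached, and every name is invented): each "process" checks its own cache, then the shared tier, and only touches the DB when both miss, so after an update only the first process to miss re-reads the DB.

```python
class TwoTierCache:
    """Toy two-tier lookup: per-process dict first, then a shared
    cache (standing in for memcached), then the DB. Hypothetical
    names; a real setup would use a memcached client here."""

    def __init__(self, shared, fetch_from_db):
        self._local = {}          # per-process node cache
        self._shared = shared     # shared tier (stand-in for memcached)
        self._fetch = fetch_from_db

    def get(self, key):
        if key in self._local:
            return self._local[key]
        value = self._shared.get(key)   # shared-tier lookup
        if value is None:
            value = self._fetch(key)    # only this process hits the DB
            self._shared[key] = value   # repopulate the shared tier
        self._local[key] = value
        return value

    def evict_local(self, key):
        # An Apache child can free memory aggressively: the shared
        # copy makes the eventual re-fetch cheap (no DB round trip).
        self._local.pop(key, None)
```

This is also why the children can discard nodes more freely: evicting from the per-process tier no longer means a DB read on the next hit, just a shared-cache lookup.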
And memcached should make a big improvement in how well the site "scales". As is, the memory and DB-throughput requirements incur a multiplier effect that probably means twice the traffic requires more than twice the memory and DB throughput. Memcached should make that closer to linear.
Thanks for the discussion.