Nice talk.
It's actually been a while since I last watched it through. The last time was probably when I last mentioned it here; and the first time a year or so before that.
I do remember that he goes through it at a lick -- thank dog for pause and rewind -- but for me that is preferable to speakers who labour over each point.
What I consider the most significant point in the talk is that he reinforced my long-held stance that what screws up most concurrent programming is that people are always trying to lock-step data between threads, and so use far too much locking; far more than is necessary most of the time.
He almost skips over -- certainly de-emphasises -- the assumption of correct atomicity of writes to shared data: when one thread reads a shared value, it is "the latest" available. It might change a microsecond after, or might have changed a microsecond before; but when you read it, it is the latest value. So do what you need to with it, then try to put it back. If it has changed in the meantime, do it again.
I agree that this is very similar to RCU; and for individual updates to values, it works fine.
The reason I've rejected it for the iterator problem is that it isn't individual values changing that is the big danger; it is the state of the entire structure changing -- i.e. resizes. In order to use RCU, the iterator would have to take a complete copy of the hash in order to provide the guarantees you need from an iterator: that all ((still) existing) keys will be visited once, and only once.
In Cliff Click's scheme, he creates the resized hash and keeps the old one around, and values propagate from the old to the new piecemeal as they are accessed. But he only makes the same guarantees for iterators as the original ConcurrentHashMap, which are basically very few and very weak.
The multiple (cyclic) generation counts in a single 64-bit word that you mentioned would likely be a bottleneck for huge numbers of threads.
I'm hoping that will not be the case. The reason for wanting to restrict the overall version to a 64-bit value is that that is the largest size I can guarantee to be able to handle with a single atomic instruction:
__int64 _InterlockedCompareExchange64(__int64 volatile *, __int64, __int64)
There is also an atomic instruction for incrementing 16-bit fields:
short _InterlockedIncrement16(short volatile *Addend)
which I believe means that my version counting and comparison will be both lock-free and wait-free, and thus should not become a bottleneck.
In reply to Re^4: Design thoughts: iterator invalidation (links) by BrowserUk
in thread Design thoughts: iterator invalidation by BrowserUk