Isn't this the same issue as reading data from a database concurrently? Would you want to introduce transactions and commits into your scheme?
Yes, it's the same issue; and the whole start/end-transaction, commit/rollback dance is a very heavyweight solution for an in-memory problem.
It's often referred to as Software Transactional Memory, a seductive term for what is actually just a whole lot of smoke and mirrors around the following:
1. take a copy of the data you want to modify;
2. make a checksum of the original;
3. modify the copy;
4. recalculate the checksum of the original;
5. if the checksum has changed, go back to step 1;
6. copy the modified copy over the original.
Now think about the locking required when performing steps 1, 2, 4 and 6, and ask yourself: wouldn't it have been quicker to just lock the data, modify it, and unlock it?
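For concreteness, here is a rough Perl sketch of that checksum-and-retry scheme next to the plain lock. The names (%data, the counter field, make_checksum) are purely illustrative, and this is not any real STM library's API:

    #!/usr/bin/perl
    # Rough sketch of the checksum-and-retry scheme described above,
    # using threads::shared.  All names here are illustrative only.
    use strict;
    use warnings;
    use threads;
    use threads::shared;
    use Storable qw(freeze);
    use Digest::MD5 qw(md5_hex);

    my %data :shared = ( counter => 0 );

    sub make_checksum {
        # Snapshot the shared hash under a lock so the digest is stable.
        lock %data;
        local $Storable::canonical = 1;
        return md5_hex( freeze( { %data } ) );
    }

    sub stm_style_update {
        while ( 1 ) {
            my %copy;
            { lock %data; %copy = %data; }    # step 1: take a copy
            my $before = make_checksum();     # step 2: checksum the original
            $copy{counter}++;                 # step 3: modify the copy
            my $after = make_checksum();      # step 4: recalculate the checksum
            next if $after ne $before;        # step 5: someone got in first; retry
            lock %data;                       # step 6: write the copy back
            %data = %copy;
            return;
        }
    }

    sub lock_style_update {
        # The alternative: lock the data; modify it; unlock (at scope exit).
        lock %data;
        $data{counter}++;
    }

Note that steps 1 and 2 really need to happen under a single lock, and so do steps 4 through 6 if the recheck is to mean anything; once you hold a lock across all of those, you have effectively reinvented lock_style_update() with extra copying and hashing on top.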
Some of your issues can already occur when you have a single iterator over a hash, e.g. when using each. Or not?
Indeed. All of them. In fact, I seem to recall that someone once meditated on the idea that each should be deprecated because of it. (I did try to find it and failed; though I did turn up the one where the guy wanted to deprecate if in OO code! Reckoned it could all be done with subclassing and overrides as I recall, though I didn't reread it. :)
But the problem becomes far worse when it's not just the code/subroutines you call from your iterating loop that might (deterministically) undo you, but any other thread with visibility of the hash, acting with non-deterministic timing. Hence I feel that it warrants some effort.
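A tiny single-threaded Perl example of the each pitfall being referred to (the hash contents are arbitrary); add a second thread poking at the same hash and you get the same breakage, but at unpredictable moments:

    #!/usr/bin/perl
    # Adding keys while iterating with each() leaves the iterator in an
    # unspecified state, so entries may be skipped or seen twice
    # (see perldoc -f each).
    use strict;
    use warnings;

    my %h = map { $_ => 1 } 'a' .. 'e';

    while ( my ( $k, $v ) = each %h ) {
        print "visiting $k\n";
        # Mutating the hash mid-iteration; only deleting the key most
        # recently returned by each() is documented as safe.
        $h{"extra_$k"} = 1 if $k eq 'c';
    }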
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use every day'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.