in reply to Re^4: A faster?, safer, user transparent, shared variable "locking" mechanism.
in thread A faster?, safer, user transparent, shared variable "locking" mechanism.
/me wonders why the Parrotistas dismissed the idea....
I'm not terribly familiar with processors other than x86 and a few 8-bitters, but I've lurked on the linux kernel mailing list for a couple of years. I haven't heard of any of the kernel's locking techniques relying on being executed in a protected context--they normally rely on atomic operations. (Though there were a couple of interesting threads a few years back about "memory barriers" ... perhaps that's related. Memory barriers enforce that prior R/W operations actually make it to memory before the next series of R/W operations is issued.)
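Just to illustrate the flavor of it--this is only a sketch using C11 <stdatomic.h>, not the kernel's actual primitives--a bare-bones spinlock needs nothing more than an atomic test-and-set plus acquire/release barriers, and it runs fine in unprivileged user code:

    #include <stdatomic.h>

    /* Bare-bones spinlock -- a sketch, not the kernel's implementation.
     * The only "magic" is the atomic read-modify-write; no protected
     * context is required. */
    typedef struct { atomic_flag locked; } spin_t;

    #define SPIN_INIT { ATOMIC_FLAG_INIT }

    static void spin_lock(spin_t *s)
    {
        /* Atomic test-and-set; the acquire barrier keeps later reads
         * and writes from being reordered above the lock. */
        while (atomic_flag_test_and_set_explicit(&s->locked,
                                                 memory_order_acquire))
            ;  /* spin until the flag was previously clear */
    }

    static void spin_unlock(spin_t *s)
    {
        /* Release barrier: everything written inside the critical
         * section reaches memory before the lock appears free. */
        atomic_flag_clear_explicit(&s->locked, memory_order_release);
    }

Usage is just spin_t lock = SPIN_INIT; spin_lock(&lock); ... spin_unlock(&lock); -- plain user-space C all the way down.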
My suspicion is that doing locking in a protected context is normally done merely to ensure that the actual mutexes/condition variables/etc. are in a protected area so an errant memcpy() can't lock everyone out of the system.
WRT NUMA, etc., I would imagine that the cache coherency in the system would protect you from most multi-cpu interactions--as long as you keep your mutex in the same cache line as the value or handle of the variable you're protecting.
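Something along these lines--the 64-byte line size is only an assumption (it varies by processor), and the struct is hypothetical, not anything from the thread's code:

    #include <stdalign.h>
    #include <stdatomic.h>
    #include <stdint.h>

    /* Assumed line size; real values vary by processor. */
    #define CACHE_LINE 64

    /* Keep the lock in the same cache line as the value it guards: the
     * CPU that wins the lock has already pulled the data into its cache,
     * and padding the struct out to a full line keeps neighbouring locks
     * from false-sharing that line across CPUs/NUMA nodes. */
    struct shared_var {
        alignas(CACHE_LINE) atomic_flag lock;  /* guards 'value'             */
        int64_t value;                         /* the shared variable itself */
        /* sizeof(struct shared_var) rounds up to CACHE_LINE
         * because of the alignment, so an array of these never
         * puts two locks in the same line. */
    };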
I don't think it's wasted time to ruminate on these issues. It sharpens the brain for other tasks. And sometimes you come up with the right idea and make something better. If only I could find my ASM backups ... I could dig up a few routines I used in a robotics job and dust 'em off and play with 'em a bit. Ah, well...
--roboticus