in reply to Re^6: Randomization as a cache clearing mechanism (races)
in thread Randomization as a cache clearing mechanism

Hm. The memcached API has five principal commands: get, add, set, replace & delete. A distinct add, which fails when the key is already present in the cache, combined with a deletion delay, helps prevent races (though not completely). I think the developers' intention is to avoid introducing locks or versions at all costs.
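
For illustration, a sketch of the add-as-lock trick from Perl with the Cache::Memcached client (the server address, key names and build_report are made up):

    use Cache::Memcached;

    # Example client setup; the server address is just a placeholder.
    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

    # add() stores a value only if the key is NOT already in the cache,
    # so the first process to add the lock key wins the right to rebuild.
    if ($memd->add('lock:report', 1, 30)) {    # 30s expiry as a safety net
        $memd->set('report', build_report());  # build_report() is hypothetical
        $memd->delete('lock:report');
    }
    else {
        my $report = $memd->get('report');     # someone else is rebuilding
    }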

And yes, I wouldn't use memcached in a banking environment; it seems to be a MySQL-type product -- speed ahead of reliability.

Thanks for bringing it up. Working on things that won't do much harm to humanity if they fail tends to shift priorities :)

Re^8: Randomization as a cache clearing mechanism (races)
by ryantate (Friar) on Nov 21, 2004 at 23:01 UTC
    speed ahead of reliability

    In fact, if a cache were to handle concurrency and data integrity perfectly, it would be pretty durn close to being an RDBMS, which would be much beside the point.

      Supporting optimistic locking would be easy and fast, and it doesn't come close to making memcached a database (a database must support pessimistic locking, which is where all of the difficulty comes in).

      The changes I'm talking about just involve having certain updates fail immediately. They are nearly trivial changes that would fundamentally change how reliably memcached can be used.
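
      A sketch of what such an optimistic update could look like from Perl; get_with_version and check_and_set are hypothetical names for the proposed operations, not part of the current memcached API:

          # $memd is a client speaking the proposed extended protocol.
          # Read the value together with a version token, modify it, and
          # write back only if nobody else has written in the meantime.
          for my $try (1 .. 3) {
              my ($value, $version) = $memd->get_with_version('counter');
              $value++;
              last if $memd->check_and_set('counter', $value, $version);
              # check_and_set failed immediately: another client updated
              # the key first, so loop and recompute rather than clobber.
          }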

      I believe it already routes all transactions for a single object to the same server, where they are handled in a single-threaded manner. So there is very little left to fix.
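
      (Clients get that routing by hashing the key, roughly like this sketch using String::CRC32, with made-up host addresses:)

          use String::CRC32;

          # Every client hashes a key the same way, so all operations on
          # that key hit the same server and are serialized there.
          my @servers = ('10.0.0.1:11211', '10.0.0.2:11211');  # examples
          my $server  = $servers[ crc32($key) % @servers ];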

      - tye        

        I'm new to the concept of optimistic locking. The delete delay in memcached is a feature of the delete command that lets you specify a delay during which all further adds, gets, replaces & deletes for the just-deleted object will fail. Does that sound similar?
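
        With the Perl client that looks something like this (the second argument to delete is the blocking window in seconds; the key name and $fresh_value are made up):

            # $memd is a connected Cache::Memcached handle.
            # delete() takes an optional delay; attempts to store the
            # same key again fail while the window is open.
            $memd->delete('user:42', 10);    # block 'user:42' for ~10s
            $memd->add('user:42', $fresh_value)
                or warn "user:42 is still in its deletion window\n";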

        Anyway, I'm going to read up on optimistic locks; they sound interesting.