in reply to Caching DBM hash tie

Take a step back and think about how much time you are willing to invest in plugging the hole vs. replacing the dam. I understand that the DBM -> SQL jump seems like a long journey, but it may well be worth making now. Instead of looking to extend the tied DBM with delayed writes and the like, think about how you could scope out DBI access behind a hash-like interface, so that a simple s/oldhash/newfunc/g converts your existing code (a minimal sketch follows). Once the speed issue is resolved, go back and make the app's core a better fit for a true SQL app. IMHO you will end up better off in the long run, and maybe even spend about the same amount of time as the current fix would take.
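A minimal sketch of that approach, assuming a hypothetical table named kv with columns k and v (none of these names come from the thread): a small tie class that routes hash access through DBI, so the rest of the code keeps treating the data as an ordinary hash.

    # Hypothetical sketch -- table "kv" and columns "k"/"v" are
    # invented for illustration. Iteration (FIRSTKEY/NEXTKEY) is
    # omitted to keep the sketch short.
    package DBIHash;
    use strict;
    use warnings;
    use DBI;

    sub TIEHASH {
        my ($class, $dsn, $user, $pass) = @_;
        my $dbh = DBI->connect($dsn, $user, $pass,
            { RaiseError => 1, AutoCommit => 1 });
        return bless { dbh => $dbh }, $class;
    }

    sub FETCH {
        my ($self, $key) = @_;
        my ($val) = $self->{dbh}->selectrow_array(
            'SELECT v FROM kv WHERE k = ?', undef, $key);
        return $val;
    }

    sub STORE {
        my ($self, $key, $val) = @_;
        # Portable upsert: UPDATE first, INSERT if no row matched.
        my $rows = $self->{dbh}->do(
            'UPDATE kv SET v = ? WHERE k = ?', undef, $val, $key);
        $self->{dbh}->do(
            'INSERT INTO kv (k, v) VALUES (?, ?)', undef, $key, $val)
            if $rows == 0;
    }

    sub DELETE {
        my ($self, $key) = @_;
        $self->{dbh}->do('DELETE FROM kv WHERE k = ?', undef, $key);
    }

    sub EXISTS {
        my ($self, $key) = @_;
        my ($n) = $self->{dbh}->selectrow_array(
            'SELECT COUNT(*) FROM kv WHERE k = ?', undef, $key);
        return $n > 0;
    }

    1;

With this in place, tie my %h, 'DBIHash', 'dbi:Pg:dbname=app', $user, $pass; replaces the DBM tie and the rest of the code keeps using %h as a plain hash.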

-Waswas

Re: Re: Caching DBM hash tie
by Tardis (Pilgrim) on Jul 16, 2003 at 01:12 UTC
    It's actually not that much of a band-aid, when you think about it.

    It's a general-purpose extension to a simple tie that provides SQL-like concurrency on top of a standard hash mechanism.

    For what it's worth, the code to convert the app to a half-baked SQL solution is already written. There is just a fairly high scare factor in actually using it.

    The current code uses the ability to lock the entire database as a way of ensuring data integrity (say, during financial operations). That still needs to happen, but if we left all the locks in as they were, we'd gain no benefit from having SQL.

    The solution was to use PostgreSQL's SELECT ... FOR UPDATE in appropriate places, along with SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, to lock rows we are about to (or may be about to) change.
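    A hedged sketch of that pattern with DBI, assuming an invented accounts table (id, balance) and made-up values; only the FOR UPDATE and isolation-level shape is the point here.

        use strict;
        use warnings;
        use DBI;

        # Database name, credentials, table, and values are all
        # placeholders, not from the actual application.
        my $dbh = DBI->connect('dbi:Pg:dbname=app', 'user', 'pass',
            { RaiseError => 1, AutoCommit => 0 });

        # Must be the first statement of the transaction.
        $dbh->do('SET TRANSACTION ISOLATION LEVEL SERIALIZABLE');

        my ($account_id, $amount) = (42, 100);

        # FOR UPDATE locks only this row until commit, replacing the
        # whole-database lock the DBM version needed.
        my ($balance) = $dbh->selectrow_array(
            'SELECT balance FROM accounts WHERE id = ? FOR UPDATE',
            undef, $account_id);

        $dbh->do('UPDATE accounts SET balance = ? WHERE id = ?',
            undef, $balance - $amount, $account_id);

        $dbh->commit;    # commit releases the row lock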

    All of these things mean API changes, and possible data-integrity issues if not done correctly.

    Keeping API breakage to a minimum is a real concern here.

      You may be able to get some performance boost with MLDBM::Sync; it lets you batch locks and cache reads (a rough sketch follows). But any row-level locking you try to accomplish still has the same problems you describe above...
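      A rough sketch of that, with the cache file and workload invented for illustration (Lock, UnLock, and SyncCacheSize are documented MLDBM::Sync calls):

          use strict;
          use warnings;
          use Fcntl qw(:DEFAULT);
          use MLDBM::Sync;                 # flock-based locking wrapper
          use MLDBM qw(DB_File Storable);  # underlying DBM + serializer

          # File path is a placeholder.
          tie my %cache, 'MLDBM::Sync', '/tmp/app.dbm', O_CREAT | O_RDWR, 0640
              or die "tie failed: $!";

          (tied %cache)->SyncCacheSize('100K');   # in-memory read cache

          # Batch many writes under one lock instead of locking per access.
          my @keys = ('alpha', 'beta');           # placeholder data
          (tied %cache)->Lock;
          $cache{$_} = length $_ for @keys;       # stand-in for real work
          (tied %cache)->UnLock;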

      -Waswas