in reply to Re: Caching DB rows transparently and with SQL search
in thread Caching DB rows transparently and with SQL search

Yes, because at the moment I supply a hash of conditions and attributes for the DB search. You can't effectively memoize with a nested hash as input, can you? I mean, you can compare deeply, but is that more efficient than hitting the DB?
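For illustration, here is roughly what it would take to memoize such a call: canonicalize the nested hash into a string first and hand that to Memoize via its NORMALIZER hook. This is only a sketch; the function name and argument shapes are made up.

    use strict;
    use warnings;
    use Memoize;
    use JSON::PP ();

    # Canonical JSON (sorted keys) turns equivalent nested hashes
    # into identical strings, so they land in the same cache slot.
    my $json = JSON::PP->new->canonical;

    sub _normalize {
        my (%args) = @_;
        return $json->encode(\%args);
    }

    # Hypothetical search function taking nested conditions/attributes.
    sub search_rows {
        my (%args) = @_;
        # ... expensive DB query here ...
        return [];
    }

    memoize('search_rows', NORMALIZER => \&_normalize);

    # Both calls normalize to the same key despite different argument order:
    my $r1 = search_rows(cond => { name => 'foo', age => 42 }, attrs => { rows => 10 });
    my $r2 = search_rows(attrs => { rows => 10 }, cond => { age => 42, name => 'foo' });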

At a lower level (say, subclassing DBIx::Class's search, as the link you cited suggests), the cache should be able to recognize equivalent SQL even when the statements look different (clause order, temporary table names).

But I now see more clearly that constraining and abstracting the searches before they are turned into SQL, before they even hit DBIx::Class, would be much more efficient, though it creates a parallel universe of code in the app.


Replies are listed 'Best First'.
Re^3: Caching DB rows transparently and with SQL search
by LanX (Saint) on Jul 04, 2020 at 12:52 UTC
    I have no DBIC expertise, sorry.

    Hopefully one of the grandees will answer here; I've sent a PM to one of them.

    > You can't memoize with a nested hash, as input, effectively can you?

    Dunno!

    Well, the naive approach is to stringify the nested hash with Data::Dump or similar and use the result as a hash key.

    But, like I said, this will lead to a lot of redundant data, and you might need to free memory from time to time. A rough sketch follows below.
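    Something like this, using core Data::Dumper for the stringification and a crude size cap to free memory; run_db_search and the limit are just placeholders:

        use strict;
        use warnings;
        use Data::Dumper;

        my %cache;
        my $MAX_ENTRIES = 10_000;   # arbitrary limit; tune it or use a proper LRU module

        sub cached_search {
            my ($conditions) = @_;

            # Sortkeys makes the dump order-independent, so equivalent
            # nested hashes produce the same cache key.
            local $Data::Dumper::Sortkeys = 1;
            local $Data::Dumper::Indent   = 0;
            my $key = Dumper($conditions);

            return $cache{$key} if exists $cache{$key};

            # "free memory from time to time" - crude: drop everything when full
            %cache = () if keys %cache >= $MAX_ENTRIES;

            return $cache{$key} = run_db_search($conditions);   # hypothetical DB call
        }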

    > I mean you can compare deeply but is that more efficient than hitting the DB?

    IMHO only if you maintain index tables for key columns in hashes.

    Like an AND condition being a hash slice (intersection) of the two %indices, and an OR a join (union) of both %indices.
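    Roughly what I mean, sketched with plain hashes; the column names, values and row ids are invented:

        use strict;
        use warnings;

        # One index hash per key column: value => { row_id => 1, ... }
        my %by_name = ( foo => { 1 => 1, 3 => 1 }, bar => { 2 => 1 } );
        my %by_age  = ( 42  => { 1 => 1, 2 => 1 } );

        # AND: keep only row ids present in both indices (intersection).
        sub and_ids {
            my ($x, $y) = @_;
            return [ grep { exists $y->{$_} } keys %$x ];
        }

        # OR: union of both id sets, built with a hash slice.
        sub or_ids {
            my ($x, $y) = @_;
            my %union;
            @union{ keys %$x, keys %$y } = ();
            return [ keys %union ];
        }

        my $and = and_ids( $by_name{foo}, $by_age{42} );   # name=foo AND age=42 -> [1]
        my $or  = or_ids(  $by_name{foo}, $by_age{42} );   # name=foo OR  age=42 -> [1,2,3]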

    But I'd guess there are already XS modules available offering in-memory SQL?

    Does SQLite always operate on the filesystem?
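    If memory serves, it doesn't have to: DBD::SQLite accepts dbname=:memory: for a purely in-memory database that disappears when you disconnect. A minimal sketch, with an invented table:

        use strict;
        use warnings;
        use DBI;

        # ":memory:" gives a private in-memory database, no file involved.
        my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                               { RaiseError => 1, AutoCommit => 1 });

        $dbh->do('CREATE TABLE cache_rows (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)');
        $dbh->do('INSERT INTO cache_rows (name, age) VALUES (?, ?)', undef, 'foo', 42);

        my $rows = $dbh->selectall_arrayref(
            'SELECT id, name FROM cache_rows WHERE age = ?', { Slice => {} }, 42,
        );
        print "$_->{id}: $_->{name}\n" for @$rows;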

    Sorry, I'm guessing here; I'm more of an SQL user lacking deep knowledge.

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery