in reply to Working with large number of small constantly updated records

Cache::Memcached?

I've seen people do some really clever things by wrapping their SQL calls in functions that try the cache first and, on a miss, fall back to DBI. Then you need a protocol for dropping cache entries (or deleting them from some other node) when things go stale.
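
For example, here's a minimal sketch of that read-through/invalidate pattern, assuming Cache::Memcached, a MySQL database via DBI, and a hypothetical "records" table (names and TTL are just placeholders):

    use strict;
    use warnings;
    use Cache::Memcached;
    use DBI;

    # Assumes a local memcached daemon and a hypothetical "records" table.
    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });
    my $dbh  = DBI->connect('dbi:mysql:database=app', 'user', 'pass',
                            { RaiseError => 1 });

    sub get_record {
        my ($id) = @_;
        my $key  = "record:$id";

        # Try the cache first ...
        my $row = $memd->get($key);
        return $row if $row;

        # ... on a miss, fall back on DBI and prime the cache.
        $row = $dbh->selectrow_hashref(
            'SELECT * FROM records WHERE id = ?', undef, $id);
        $memd->set($key, $row, 300) if $row;    # 5 minute TTL
        return $row;
    }

    sub update_record_state {
        my ($id, $state) = @_;

        # Write to the database, then drop the now-stale cache entry
        # so the next read repopulates it.
        $dbh->do('UPDATE records SET state = ? WHERE id = ?',
                 undef, $state, $id);
        $memd->delete("record:$id");
    }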

-Paul


Re^2: Working with large number of small constantly updated records
by techcode (Hermit) on Apr 27, 2009 at 20:33 UTC

    Thanks for the suggestion, but I'm not sure how Memcached would fit in, since:

    If you have a high-traffic site that is dynamically generated with a high database load that contains mostly read threads then memcached can help lighten the load on your database.
    Though there is one part of the system where caching could help - in cases where we've recently done some of that remote work (DNS/HTTP), so we wouldn't need to do it again if the result is already in the cache.
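
    Something like this rough sketch is what I'd picture for that part (the key scheme and one-hour TTL are just assumptions), using Cache::Memcached in front of Net::DNS:

        use strict;
        use warnings;
        use Cache::Memcached;
        use Net::DNS;

        my $memd     = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });
        my $resolver = Net::DNS::Resolver->new;

        # Look up the MX hosts for a domain, reusing a cached answer if present.
        sub mx_hosts {
            my ($domain) = @_;
            my $key = "mx:$domain";

            my $cached = $memd->get($key);
            return @$cached if $cached;

            my @hosts = map { $_->exchange } mx($resolver, $domain);
            $memd->set($key, \@hosts, 3600);    # cache for an hour
            return @hosts;
        }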

    The thing here is that there are a lot of updates - and it is very important to track what "state" the processing of each record is in. Everything is paid per record, both by our clients to us and by us to our business associates :)


    Have you tried freelancing/outsourcing? Check out Scriptlance - I've been working there since 2003. For more info about Scriptlance and freelancing in general, check out my home node.