in reply to persistent cache using Cache::FileCache

Why are databases not an option? Speed?

You might want to try memcached in front of a database. That should be very fast and very scalable.
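If speed is the concern, the pattern is just a read-through cache: check memcached first, fall back to the database on a miss, and store what you fetched. A rough sketch, assuming Cache::Memcached and DBI, a memcached daemon on the default port, and made-up table and key names:

    use strict;
    use warnings;
    use Cache::Memcached;
    use DBI;

    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });
    my $dbh  = DBI->connect('dbi:mysql:database=test', 'user', 'pass',
                            { RaiseError => 1 });

    sub get_record {
        my ($id) = @_;
        my $key  = "record:$id";

        # Try the cache first ...
        my $row = $memd->get($key);
        return $row if defined $row;

        # ... fall back to the database on a miss, then cache the result.
        $row = $dbh->selectrow_hashref(
            'SELECT * FROM records WHERE id = ?', undef, $id);
        $memd->set($key, $row, 300) if $row;    # 5-minute expiry
        return $row;
    }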


Re^2: persistent cache using Cache::FileCache
by perrin (Chancellor) on Nov 05, 2004 at 21:38 UTC
    It is very scalable, but it's not as fast as BerkeleyDB or Cache::FastMmap for local storage. It's also not really faster than MySQL for simple primary key lookups.
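    For a sense of what the "local storage" side looks like, here's a minimal Cache::FastMmap sketch (the file path and sizes are just examples); everything lives in an mmap'ed file on the local box, so there's no network round trip at all:

        use strict;
        use warnings;
        use Cache::FastMmap;

        my $cache = Cache::FastMmap->new(
            share_file  => '/tmp/example_cache',  # mmap'ed file shared by all local processes
            cache_size  => '10m',                 # total cache size
            expire_time => 600,                   # seconds before entries expire
        );

        $cache->set('user:42', { name => 'alice' });
        my $user = $cache->get('user:42');
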
      Hmmm, have you run benchmarks? For reads, it's going to be "fast": faster, I'd expect, than BerkeleyDB, more or less as fast as MySQL if the queries really are simple indexed key lookups, and significantly faster otherwise. I don't know anything about Cache::FastMmap. (But I'll read up on it; sounds interesting!)

      If you have lots of RAM and your data is big as well, memcached will outperform MySQL, because you can run several daemons and circumvent the per-process size limit. (Talking 32-bit here.)
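      The "several daemons" part is just a longer server list on the client side; Cache::Memcached hashes keys across whatever you hand it (the hosts and the weight below are invented):

          use Cache::Memcached;

          my $memd = Cache::Memcached->new({
              servers => [ '10.0.0.1:11211',
                           '10.0.0.2:11211',
                           [ '10.0.0.3:11211', 2 ] ],  # optional weight: gets a larger share of keys
          });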

        I've run benchmarks -- I wouldn't make statements like that without trying it. BerkeleyDB is insanely fast. You can run Rob Mueller's benchmark yourself and see what you think, although it doesn't currently include a memcached comparison. It makes sense that BerkeleyDB would be faster than memcached, since it has no network overhead or separate server process and can cache things in shared memory.
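        For reference, a local BerkeleyDB cache is roughly this kind of setup (just a sketch; the paths are made up, and the environment directory has to exist). The shared environment means every process on the box hits the same page cache, with no network hop and no separate server:

            use strict;
            use warnings;
            use BerkeleyDB;

            my $env = BerkeleyDB::Env->new(
                -Home  => '/tmp/bdb_cache',   # directory must already exist
                -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB,
            ) or die "cannot open env: $BerkeleyDB::Error";

            my $db = BerkeleyDB::Hash->new(
                -Filename => 'cache.db',
                -Env      => $env,
                -Flags    => DB_CREATE,
            ) or die "cannot open db: $BerkeleyDB::Error";

            $db->db_put('some_key', 'some_value');
            my $value;
            $db->db_get('some_key', $value);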

        Striping your data across multiple machines is an advantage of memcached, although it doesn't seem relevant to this particular person's needs.