in reply to Re^3: persistent cache using Cache::FileCache
in thread persistent cache using Cache::FileCache

I've run benchmarks -- I wouldn't make statements like that without trying it. BerkeleyDB is insanely fast. You can run Rob Mueller's benchmark yourself and see what you think, although it doesn't currently include a memcached comparison. It makes sense that BerkeleyDB would be faster than memcached: it has no network overhead, no separate server process, and it can cache data in shared memory.

Striping your data across multiple machines is an advantage of memcached, although it doesn't seem relevant to this particular person's needs.
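The striping itself is done client-side: every client checksums the key and maps it onto the same server list. Here's a minimal sketch of the idea; the server addresses and the `pick_server` helper are illustrative, and the checksum shown is not necessarily the hash a real memcached client uses.

```perl
use strict;
use warnings;

# Hypothetical pool of memcached instances; real clients take a
# similar server list in their constructor.
my @servers = ('10.0.0.1:11211', '10.0.0.2:11211', '10.0.0.3:11211');

# Map a key to one server deterministically: checksum the key's bytes
# (unpack's '%32C*' template sums them into a 32-bit value), then take
# that modulo the number of servers.
sub pick_server {
    my ($key) = @_;
    my $sum = unpack '%32C*', $key;
    return $servers[ $sum % @servers ];
}

# Every client that shares the same server list agrees on where a key
# lives, so the cache is striped across machines with no coordination.
print pick_server('user:42'), "\n";
```

Because the mapping is pure arithmetic, adding capacity is just a matter of growing the server list (at the cost of remapping most keys).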


Re^5: persistent cache using Cache::FileCache
by gaal (Parson) on Nov 06, 2004 at 08:03 UTC
    Thanks for the very interesting benchmarks. I didn't realize BerkeleyDB was so fast.

    If you mentioned striping in response to my point about huge data, note that you can get its benefits even on a single machine. If, for example, you have 8GB of RAM on a 32-bit machine, you can run several memcached instances on the same host, each under 2GB in size, and still have an effective cache size close to your potential maximum. Of course, in a few years, when 64-bit machines become common, this advantage goes away, but in the meanwhile memcached works around the limit pretty much transparently.
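    Concretely, the setup might look something like this. This is a configuration sketch, not a tested deployment: the ports and sizes are made up, and it assumes memcached's standard `-d`/`-m`/`-p` flags and the Cache::Memcached client module.

```perl
# Start four instances on one 8GB box, each capped well below the
# 32-bit per-process limit (run these from a shell or init script):
#   memcached -d -m 1800 -p 11211
#   memcached -d -m 1800 -p 11212
#   memcached -d -m 1800 -p 11213
#   memcached -d -m 1800 -p 11214

use Cache::Memcached;

# The client stripes keys across all four instances transparently,
# giving roughly 7GB of effective cache on a single machine.
my $memd = Cache::Memcached->new({
    servers => [ '127.0.0.1:11211', '127.0.0.1:11212',
                 '127.0.0.1:11213', '127.0.0.1:11214' ],
});

$memd->set('some_key', 'some_value');
my $value = $memd->get('some_key');
```

    From the application's point of view nothing changes -- it's still one `get`/`set` interface, regardless of how many processes sit behind it.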

    But yes, this is not necessarily as useful for the OP's needs as it was for the designers of memcached.

      How can you get a 32-bit machine to even recognize 8GB of RAM? I didn't think individual process size was the problem there. Are you talking about using PAE?
        Linux supports up to 64GB of RAM on 32-bit machines (see CONFIG_NOHIGHMEM and the nearby high-memory kernel options). I'm not an expert here, but yes, the docs do say you need PAE. A single process still won't be able to address all of it at once, but with memcached that doesn't change the usage semantics.