I've never seen the behavior you're describing. It should not remove the files unless you intentionally clear the cache. However, Cache::FileCache is not very fast; BerkeleyDB, Cache::FastMmap, or even DBD::mysql will be much faster. | [reply] |
You may find DBM::Deep of interest. It's similar to Cache::FileCache but can also provide the "persistence" you're looking for (as long as you don't call clear()), as well as locking.
-Nitrox | [reply] [d/l] |
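If you go that route, here's a minimal sketch of how DBM::Deep could be used for persistent storage with locking (the file name and key are made up for illustration; locking => 1 enables its built-in flock-based locking):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBM::Deep;

# Everything is persisted to this single file and survives across processes.
my $db = DBM::Deep->new(
    file      => 'cache.db',
    locking   => 1,   # flock-based locking for concurrent access
    autoflush => 1,   # write through to disk immediately
);

$db->lock;            # exclusive lock while we update
$db->{pid} = $$;
$db->unlock;

print "Stored PID: $db->{pid}\n";
```

Unlike Cache::FileCache, nothing here expires or gets purged unless you delete the key yourself.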
We use Cache::FileCache extensively (HTML::Mason has hooks built in to use it) and I've never experienced the problem you're describing of the cache file disappearing. I just tried this test code, and each time I run the script I get the PID of the previous process out of the cache:
#!/usr/bin/perl
use strict;
use Cache::FileCache;

# Each namespace gets its own subdirectory under the cache root.
my $c = Cache::FileCache->new({ namespace => 'foo' });
warn "Previous PID: " . $c->get('pid');
$c->set( pid => $$ );
warn "This PID: " . $c->get('pid');
| [reply] [d/l] |
It is very scalable, but it's not as fast as BerkeleyDB or Cache::FastMmap for local storage. It's also not really faster than MySQL for simple primary-key lookups.
| [reply] |
Hmmm, have you run benchmarks? For reads, it's going to be "fast": faster, I'd expect, than BerkeleyDB, and more or less as fast as MySQL if the queries really are for simple indexed keys, significantly faster otherwise. I don't know anything about Cache::FastMmap. (But I'll read up on it; sounds interesting!)
If you have lots of RAM, and your data is big, too, memcached will outperform MySQL because you can run several daemons and circumvent the process size limit problem. (Talking 32-bit here.)
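For anyone who wants to measure rather than guess, the core Benchmark module makes a quick read comparison straightforward. A sketch (the namespace, payload size, and timing budget are arbitrary; you'd add a sub per backend you want to compare):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use Cache::FileCache;

my $c = Cache::FileCache->new({ namespace => 'bench' });
$c->set( key => 'x' x 1024 );   # 1 KB payload, arbitrary size

# Run each sub for ~3 CPU-seconds and print a comparison table.
cmpthese( -3, {
    filecache => sub { my $v = $c->get('key') },
});
```

The same harness works for BerkeleyDB, MySQL, or memcached reads; just add an entry per backend with its own get call.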
| [reply] |
I haven't tried it, but would IPC::SharedCache meet your needs? It sounds like it lets multiple processes share a cache held in shared memory.
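For reference, IPC::SharedCache exposes a tied-hash interface over SysV shared memory. A sketch as I understand its interface (the ipc_key, the callbacks, and expensive_load are illustrative; the module calls load_callback on a cache miss and validate_callback on every hit):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IPC::SharedCache;

# Stand-in for whatever expensive computation you're caching.
sub expensive_load { my $key = shift; return "value for $key" }

tie my %cache, 'IPC::SharedCache',
    ipc_key           => 'AKEY',   # shared-memory key; same in every process
    load_callback     => sub { expensive_load(shift) },
    validate_callback => sub { 1 };  # always trust cached values

# First access in any process triggers load_callback; later accesses,
# even from other processes, hit the shared-memory copy.
print $cache{foo}, "\n";
```

Whether it's fast enough is another question; SysV shared memory plus serialization adds overhead on every access.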
| [reply] |
Its performance is poor, much worse than Cache::FileCache's.
| [reply] |