Liebranca has asked for the wisdom of the Perl Monks concerning the following question:
Hello everyone,
My monolith makescript maker/syntax file generator/auto FFI-bindings emitter/parser/preprocessor/inliner/someday-to-be-compiler thingy has a lot of hashes, alright: translation tables, symbol tables, keywords organized by loose categories, lots of cool stuff.
Now, the data actually in use by the program is generated from Perl variables that are usually hashes as well. Because there's some processing of these I need to do at init time, I thought I'd start saving these things to disk before the whole thing gets too big and actually slows down startup.
I'm doing that with Storable's store/retrieve, and I already have a mechanism in place to either load the file if it exists and no update is needed, or else regenerate. This is done automatically in INIT blocks. Looks something like this:
my $result; INIT {load_cache('name',\$result,\&generator,@data)};
^slightly abbreviated for clarity, but you get the idea. Now, this is fine but it essentially means I need a separate file for each instance of some structure, which is undesirable in my case.
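For reference, a minimal sketch of what such a load_cache could look like, assuming Storable for serialization and a simple "is the cache newer than the program" freshness check (the cache path and the helper names here are hypothetical, not the poster's actual code):

```perl
use strict;
use warnings;
use Storable qw(store retrieve);

# Load a cached structure from disk if the cache file is newer than
# the running script; otherwise regenerate it and save it for next time.
sub load_cache {
    my ($name, $dst, $generator, @data) = @_;
    my $path = "./.cache-$name";    # assumed cache location

    if (-e $path && -M $path < -M $0) {
        # cache exists and is newer than the program: just load it
        $$dst = retrieve($path);
    }
    else {
        # stale or missing: regenerate, then persist for next run
        $$dst = $generator->(@data);
        store($$dst, $path);
    }
}

my $result;
load_cache('squares', \$result, sub { +{ map { $_ => $_**2 } @_ } }, 1 .. 4);
```

The -M file test gives the file age relative to script start, so a smaller value means a newer file; anything fancier (content hashes, version tags inside the frozen data) slots into the same condition.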
I'd much rather do this per-package, or a multitude of packages even, and it wouldn't really be too difficult to implement. So what's the question? There's no question. But I'd like to request some general advice on *local* databases, meaning my own computer: no cloud, no net, no servers no mambo, I save things to disk and no one else needs to know.
See, I cannot duck for "database" without getting flooded with absolutely irrelevant results about frameworks for whatever it is modern web developers and Java mongers are concerned with. It's ridiculous and it's driving me crazy.
So... tips? Conventional wisdom? Pitfalls? What to watch out for? That kind of stuff. It might be mostly just things I already know but I'd rather hear them twice than never.
Just for context, I'm on a half-burned, half-dead decade-old two-core CPU, and the biggest file in this scenario is, what, 64 KB. Absolutely *gargantuan* quantities of data. But I'm interested in efficiently storing this program data uncompressed, so that I don't end up with a million small files that need to be read individually at startup.
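On the "one file per package instead of one file per structure" idea: the usual trick is to bundle every generated structure into a single top-level hash keyed by name, so startup becomes one retrieve and shutdown one store. A minimal sketch, again assuming Storable, with all names (the cache path, cache_entry) hypothetical:

```perl
use strict;
use warnings;
use Storable qw(store retrieve);

# One bundle hash per package, keyed by structure name.
my %CACHE;
my $path = './.cache-MyPackage';    # one file per package (assumed name)

# On startup: a single retrieve instead of many small reads.
%CACHE = %{ retrieve($path) } if -e $path;

# Fetch an entry from the bundle, generating it only if missing.
sub cache_entry {
    my ($name, $generator, @data) = @_;
    $CACHE{$name} //= $generator->(@data);
    return $CACHE{$name};
}

my $keywords = cache_entry('keywords', sub { +{ map { $_ => 1 } @_ } },
                           qw(if else while));
my $symbols  = cache_entry('symbols',  sub { +{ '+' => 'add' } });

# On shutdown (or in an END block): one store writes everything back.
store(\%CACHE, $path);
```

The trade-off versus per-structure files is granularity: any change means rewriting the whole bundle, which is harmless at 64 KB but worth remembering if the data ever does grow.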
Cheers, lyeb.
Replies are listed 'Best First'.
Re: Big cache
by Corion (Patriarch) on Jul 29, 2022 at 05:41 UTC
Re: Big cache (my top ten software development practices)
by eyepopslikeamosquito (Archbishop) on Jul 29, 2022 at 08:52 UTC
  by cavac (Prior) on Aug 02, 2022 at 12:47 UTC
  by afoken (Chancellor) on Aug 02, 2022 at 14:50 UTC
Re: Big cache
by hippo (Archbishop) on Jul 29, 2022 at 09:50 UTC
  by Discipulus (Canon) on Jul 29, 2022 at 11:35 UTC
  by hippo (Archbishop) on Jul 29, 2022 at 12:38 UTC
  by Anonymous Monk on Dec 25, 2022 at 14:18 UTC
  by hippo (Archbishop) on Dec 26, 2022 at 12:34 UTC
  by 1nickt (Canon) on Jul 29, 2022 at 12:44 UTC
Re: Big cache -- serialization
by Discipulus (Canon) on Jul 29, 2022 at 07:13 UTC
Re: Big cache
by Liebranca (Acolyte) on Jul 29, 2022 at 19:11 UTC
Re: Big cache
by LanX (Saint) on Jul 28, 2022 at 21:47 UTC
  by Liebranca (Acolyte) on Jul 28, 2022 at 23:08 UTC
  by LanX (Saint) on Jul 28, 2022 at 23:30 UTC