PerlMonks |
Re^7: A memory efficient hash, trading off speed - does it already exist? by Aristotle (Chancellor)
on Feb 07, 2003 at 20:29 UTC ( #233572=note )
Good question. My proposal uses only a single hash too, though. You can store the keyword -> symbol list and the keyword+symbol -> weight entries in the same hash, provided your keywords never contain a "\0" - but if they did, the keyword+symbol or symbol+weight packing would break as well anyway. Obviously, if you split the symbol string anyway, interleaving the weights into the same string comes at very little extra cost, and in that case my proposal doesn't really have a lot of merit. It shines when you frequently know the symbol you're looking for beforehand and can thus skip the splitting step: then you can look up the weight directly.

Looking up the same symbols frequently would actually be an argument in favour of this approach, as its key/value pairs each constitute a much smaller amount of data, allowing the OS and/or DBM drivers to keep many more individual entries in their respective caches. It will still take up considerably more total memory than your approach (infrastructure for one extra key per symbol, plus the size of an extra copy of the keyword and symbol to store as the key) - but in the case of infrequent accesses to the symbol-list key, this actually works in favour of caching.

Another option to consider, if you need to discover possible values, might be BerkeleyDB's B-tree mode, which lets you store multiple values under identical keys and then query them in sequence. It's usually more expensive and less efficient than hash mode, though, so you might do well to split the expense among individually more manageable data sets - for example, by using 53 DBMs keyed on the first character of the keyword (one for each letter, case sensitive, plus one for non-letter characters).

Makeshifts last the longest.
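To make the dual-key idea concrete, here is a minimal sketch in plain Perl (an ordinary in-memory hash stands in for the tied DBM; the function names `add_symbol`, `weight_of`, and `symbols_of` are my own, not from the thread). It assumes keywords never contain "\0", which the composite key relies on:

```perl
use strict;
use warnings;

# One hash holds both kinds of entry:
#   keyword              -> space-separated symbol list
#   keyword."\0".symbol  -> weight (direct lookup, no splitting needed)
my %db;

sub add_symbol {
    my ($keyword, $symbol, $weight) = @_;
    # Append to the symbol list for this keyword.
    $db{$keyword} = defined $db{$keyword}
        ? "$db{$keyword} $symbol"
        : $symbol;
    # Store the weight under the composite key.
    $db{"$keyword\0$symbol"} = $weight;
}

sub weight_of {
    # Known symbol: skip the symbol list entirely.
    my ($keyword, $symbol) = @_;
    return $db{"$keyword\0$symbol"};
}

sub symbols_of {
    # Discovery path: split the symbol list.
    my ($keyword) = @_;
    return split ' ', (defined $db{$keyword} ? $db{$keyword} : '');
}

add_symbol('foo', 'bar', 3);
add_symbol('foo', 'baz', 7);
print weight_of('foo', 'baz'), "\n";          # 7
print join(',', symbols_of('foo')), "\n";     # bar,baz
```

With a tied DBM in place of %db, each key/value pair stays small, which is the caching argument above.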
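The 53-DBM split can be sketched as follows. This is a hypothetical illustration using SDBM_File (a core module) as a stand-in for BerkeleyDB, and my own helper names `bucket_for` and `dbm_for`; note that on case-insensitive filesystems the upper- and lower-case bucket files would collide, so real code would need distinct names:

```perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;   # stand-in for a BerkeleyDB hash; the routing idea is the same

my %dbm;  # bucket name -> tied hash ref, opened lazily

sub bucket_for {
    # 26 lower-case + 26 upper-case letters + 1 catch-all = 53 buckets.
    my ($keyword) = @_;
    my $c = substr $keyword, 0, 1;
    return $c =~ /[A-Za-z]/ ? $c : '_other';
}

sub dbm_for {
    my ($keyword) = @_;
    my $bucket = bucket_for($keyword);
    unless ($dbm{$bucket}) {
        my %h;
        tie %h, 'SDBM_File', "weights_$bucket", O_RDWR | O_CREAT, 0644
            or die "Cannot tie weights_$bucket: $!";
        $dbm{$bucket} = \%h;
    }
    return $dbm{$bucket};
}

# Each lookup touches only one (much smaller) DBM file.
dbm_for('apple')->{'apple'} = 'sym1 sym2';
dbm_for('9lives')->{'9lives'} = 'sym3';
print dbm_for('apple')->{'apple'}, "\n";   # sym1 sym2
```

Each bucket's working set is a fraction of the whole, so the more expensive B-tree access pattern operates on manageable files.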
In Section: Seekers of Perl Wisdom