in reply to Hash space/time tradeoff
It depends on the distribution of your key values. If many entries share the same value for an attribute, you may save a good deal of memory by making a separate hash level just for that attribute. However, each hash and each hash entry carries some per-item overhead, so if the attribute values are largely unique, splitting the hash into multiple levels will incur a bit more overhead than simply concatenating the attributes together as you're doing.
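To make the tradeoff concrete, here's a minimal sketch (in Python, since the idea is language-agnostic; the names `flat`, `nested`, and the sample records are made up) showing the same data stored both ways. The point is that the flat hash stores a copy of attr1 in every key, while the nested hash stores each distinct attr1 value only once:

```python
flat = {}     # one hash: keys are attr1 + separator + attr2
nested = {}   # two levels: outer hash keyed by attr1, inner by attr2

# Hypothetical records: (attr1, attr2, value)
records = [
    ("some_very_long_attr1_value", "x", 1),
    ("some_very_long_attr1_value", "y", 2),  # attr1 repeats here
    ("another_attr1", "x", 3),
]

for a1, a2, val in records:
    flat[a1 + "\x1f" + a2] = val          # attr1 copied into every key
    nested.setdefault(a1, {})[a2] = val   # attr1 stored once per distinct value

# Both layouts answer the same lookups:
assert flat["some_very_long_attr1_value" + "\x1f" + "y"] == 2
assert nested["some_very_long_attr1_value"]["y"] == 2
```

When attr1 values repeat, the nested layout pays one extra inner hash per distinct attr1 but drops the duplicated key bytes; when they're mostly unique, that extra hash is pure overhead.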
I'd check the distribution of attr1 first. It's long (averaging 17K per entry[1]), so the potential for savings is pretty good. If there's enough duplication, then go ahead and split out the hash level. As far as access time goes, I wouldn't worry about it until/unless the code is too slow for your purposes. Hashes are pretty quick for access, so you'd usually need to do *quite* a few lookups before the access time starts being noticeable. If it does turn out to be too slow, we'd need to know a bit more about your situation before offering useful advice.
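Checking the duplication is cheap: count distinct attr1 values against total entries, and each duplicate is roughly one 17K key you'd no longer store. A rough sketch (the `estimate_savings` helper and the record format are hypothetical, and 17K is just the average from above):

```python
from collections import Counter

def estimate_savings(records, avg_key_len=17_000):
    """Rough bytes saved by storing each distinct attr1 once
    instead of once per record (ignores per-hash overhead)."""
    counts = Counter(a1 for a1, _a2, _val in records)
    duplicates = sum(n - 1 for n in counts.values())
    return duplicates * avg_key_len

# Hypothetical data: "k1" appears twice, so one duplicate copy is saved.
records = [("k1", "a", 1), ("k1", "b", 2), ("k2", "a", 3)]
print(estimate_savings(records))  # → 17000
```

If the number this prints is small relative to your data, the extra level isn't worth the bother.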
Note: Yes, I'm assuming that the lengths of your attr1 values are distributed fairly uniformly, and that's not necessarily the case. Even 200 bytes per key can give you some savings, though, so I'm not too worried about the distribution of the key lengths, just whether there's enough duplication to yield a savings of space or not.
...roboticus
When your only tool is a hammer, all problems look like your thumb.