http://qs1969.pair.com?node_id=309017


in reply to A (memory) poor man's <strike>hash</strike> lookup table.

First, an important correction. The amount of memory taken to store a hash varies widely depending on how you build it. On 5.8, initializing a large hash with undef $foo{$_} for 1..1_000_000; takes about 69 MB. Initializing with undef @foo{1..1_000_000}; runs faster but takes about 130 MB. (This was run with Perl 5.8.1 on Linux; YMMV.) My guess is that the difference comes down to how much buffering is going on: the slice form flattens the whole key list before the assignment, while the foreach form hands over one key at a time.
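If you want to reproduce the numbers, here is a rough, Linux-specific sketch of my own (not the measurement above): build the hash one way, then read the process size out of /proc/self/status, and swap in the other initialization style to compare.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Build the hash one key at a time; substitute the other style to compare.
    my %foo;
    undef $foo{$_} for 1 .. 1_000_000;

    # Linux-specific: report the process footprint after the hash is built.
    open my $status, '<', '/proc/self/status'
        or die "Cannot read /proc/self/status: $!";
    print grep { /^Vm(?:Size|RSS):/ } <$status>;
    close $status;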

The second tip. If you are going to use a DBM file with a lot of data, always, always, always use a BTREE. On a machine about 10 times faster than yours, my insert time was 12 seconds, so call it around 2 minutes for you. See the docs for a better overview, and if you need to tune it further, there is more detailed material as well.
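For concreteness, here is a minimal sketch of what that looks like with DB_File (not from my test above; the file name and values are made up, and it assumes Berkeley DB is available):

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    # Tie the hash to an on-disk database, asking for a BTREE rather
    # than the default HASH layout.
    my %data;
    tie %data, 'DB_File', 'data.db', O_CREAT | O_RDWR, 0644, $DB_BTREE
        or die "Cannot tie data.db: $!";

    # Bulk insert a million keys.
    $data{$_} = 1 for 1 .. 1_000_000;

    untie %data;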

Now I'll grant you that walking keys close to in order is pretty much a best-case scenario, but very few real-world workloads lack considerable locality of reference, so BTREE accesses tend to win big on datasets of under a few hundred million keys.
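As a quick illustration (again a sketch of mine, not code from the benchmark): a hash tied with $DB_BTREE hands keys back in the tree's sort order, so a plain each/keys loop walks the file close to sequentially instead of bouncing all over it.

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    my %data;
    tie %data, 'DB_File', 'data.db', O_RDONLY, 0644, $DB_BTREE
        or die "Cannot tie data.db: $!";

    # Unlike an in-memory hash, iteration order here is the BTREE's
    # comparison order (string comparison by default), so consecutive
    # reads touch neighbouring pages.
    my $shown = 0;
    while ( my ( $key, $value ) = each %data ) {
        print "$key => $value\n";
        last if ++$shown >= 10;    # just show the first few
    }

    untie %data;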

And this leads to a general note. Hashes are probably the fastest general-purpose access method when you have a flat memory model, and they serve Perl very well. But the real world is moving more and more towards multiple tiers of data storage with very different speeds between them. (In decreasing order of speed: on the CPU we have level 1, 2, and 3 caches, followed by main RAM; NUMA adds a few layers here; then come local hard drives, attached storage devices, network drives, and finally storage remote to the LAN.) Given how differently these layers improve under Moore's law, this trend is likely to keep growing in importance as the speed ratios between the layers become more and more extreme, and alternatives to hashing will make more and more sense.