in reply to MLDBM Efficiency

I can't speak from personal experience with MLDBM, but based on related experience I've had with DBM-based tools, I'd say "it depends". It's been a couple of years since I've done anything myself with DBM files, so things may have changed over the course of various upgrades, but I do recall that one or another DBM package (probably the GNU one) had a problem when it came to building really large DBM files.

But once the files were built, retrieval was never a problem -- retrieval time does not increase noticeably with the amount of data being stored (in terms of number of keys or overall content). That's the whole point of DBM files.
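To make that concrete: any language's DBM bindings show the same flat retrieval cost, since lookups go straight through the hash index. Here's a minimal sketch in Python (its stdlib `dbm` module wraps the same family of libraries MLDBM sits on top of -- file name and record count are arbitrary choices for illustration):

```python
import dbm
import os
import tempfile

# Write 10,000 key/value pairs, then look one up by key.
# DBM lookups hash the key and seek directly to the data,
# so retrieval stays fast no matter how many keys are stored.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

with dbm.open(path, "c") as db:          # "c" = create if missing
    for i in range(10_000):
        db[f"key{i}"] = f"value{i}"      # str keys/values are encoded to bytes

with dbm.open(path, "r") as db:
    assert db[b"key9999"] == b"value9999"  # values come back as bytes
```

In Perl you'd get the same behavior through a hash tied via MLDBM; the point is that lookup cost depends on the hash index, not on the total volume of stored data.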

(I think the problem with building the DBM file has to do with how often, and in what manner, the tool has to rebuild its hash table -- the index used for seeking to the data for a given key -- as more keys are added. The DBM flavors that keep the hash index and data in separate files are likely to be faster than those that combine index and data in a single file.)

So if growing the database will be an ongoing activity in your app (such that users would be bothered if there were a 2-minute lag when they hit a button to add data), you'll want to test MLDBM in terms of how long it takes to add increments of, say, 5K records at a time, and look out for either occasional or consistent delays in each increment.

An adequate test would be to generate an appropriate number of "random" strings of suitable length -- maybe using MD5 checksums as the keys, or something like that.
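The test I have in mind looks something like this sketch -- Python's stdlib `dbm` and `hashlib` standing in for MLDBM, with batch size, batch count, and file name as arbitrary choices; scale the counts up for a real run and watch for batches that suddenly take much longer than their neighbors:

```python
import dbm
import hashlib
import os
import tempfile
import time

# Insert records in fixed-size batches, timing each batch.
# MD5 hex digests serve as fixed-length "random" keys.
path = os.path.join(tempfile.mkdtemp(), "loadtest.db")
BATCH = 5_000
BATCHES = 4          # bump this way up for a meaningful test

with dbm.open(path, "c") as db:
    n = 0
    for b in range(BATCHES):
        start = time.perf_counter()
        for _ in range(BATCH):
            key = hashlib.md5(str(n).encode()).hexdigest()
            db[key] = f"record {n}"
            n += 1
        elapsed = time.perf_counter() - start
        print(f"batch {b}: {elapsed:.3f}s for {BATCH} inserts")
```

An occasional slow batch suggests periodic index reorganization; steadily growing batch times suggest the per-insert cost itself is climbing, which is the case that would hurt your users.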