in reply to MLDBM Efficiency
But once the files were built, retrieval was never a problem -- retrieval time does not increase noticeably with the amount of data being stored (in terms of number of keys or overall content). That's the whole point of DBM files.
(I think the problem with building the DBM file has to do with how often, and in what manner, the tool has to rebuild its hash table -- the index structure used to seek to the data for a given key -- as more keys are added. The DBM flavors that keep the hash index and data in separate files are likely to be faster than those that put indexes and data together in a single file.)
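For what it's worth, MLDBM lets you pick which DBM flavor (and which serializer) it uses underneath, so comparing flavors is easy. A minimal sketch, assuming DB_File and Storable are installed -- the file name and sample data are just placeholders:

```perl
use strict;
use warnings;
use Fcntl;
use MLDBM qw(DB_File Storable);   # first arg: DBM backend, second: serializer

# Tie the hash to a DBM file; the extra args are passed through to the backend.
tie my %db, 'MLDBM', 'mydata.db', O_CREAT | O_RDWR, 0644
    or die "Cannot tie mydata.db: $!";

# MLDBM serializes nested structures transparently on assignment.
$db{sample_key} = { name => 'value', list => [ 1, 2, 3 ] };

untie %db;
```

Swapping DB_File for GDBM_File or SDBM_File in the `use MLDBM` line is all it takes to benchmark one flavor against another.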
So if growing the database will be an ongoing activity in your app (such that users would be bothered by a 2-minute lag when they hit a button to add data), you'll want to test how long MLDBM takes to add increments of, say, 5K records at a time, and watch for occasional or consistent delays as each increment is added.
An adequate test would be to generate an appropriate number of "random" strings of suitable length, maybe using their MD5 checksums as the keys or something like that.
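Here's a rough sketch of that kind of load test, again assuming DB_File and Storable underneath; the file name loadtest.db, the 200-character payloads, and the batch counts are arbitrary choices, not recommendations:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;
use MLDBM qw(DB_File Storable);   # assumed backend and serializer
use Digest::MD5 qw(md5_hex);
use Time::HiRes qw(time);

my $increment = 5_000;   # records added per batch
my $batches   = 20;      # 100K records total

tie my %db, 'MLDBM', 'loadtest.db', O_CREAT | O_RDWR, 0644
    or die "Cannot tie loadtest.db: $!";

for my $batch ( 1 .. $batches ) {
    my $start = time;
    for ( 1 .. $increment ) {
        # a "random" string of suitable length; its MD5 hex digest is the key
        my $string = join '', map { chr( 65 + int rand 26 ) } 1 .. 200;
        $db{ md5_hex($string) } = { payload => $string };
    }
    printf "batch %2d: added %d records in %.2f seconds\n",
        $batch, $increment, time - $start;
}

untie %db;
```

If the per-batch times stay flat, you're fine; if they climb steadily or spike now and then, that's the index-rebuilding cost showing up.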