There are two DBM::Deep file parameters I want to tune:

max_buckets: the number of entries that can be added before a reindexing, [16..256], default 16.
data_sector_size: the size in bytes of a given data sector, [32..256], default 64.

Assume I have a representative sample of the data that would be stored. I want to choose parameter values that optimize speed and disk space. I would prioritize speed, but I would give up a few percentage points for a big improvement in disk space.
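For context, both settings are supplied once, when the file is first created; something like this (the file name and the particular values are just placeholders):

```perl
use strict;
use warnings;
use DBM::Deep;

# Placeholder file name and values; both options are fixed at
# creation time and cannot be changed for an existing file.
my $db = DBM::Deep->new(
    file             => 'sample.db',
    max_buckets      => 64,     # [16..256], default 16
    data_sector_size => 128,    # [32..256], default 64
);

$db->{some_key} = 'some value';
```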
I have a script that will read several GB of input data and create/update the hash/db file, then output some of the hash data in table format. Processing time can take several days, and the db file size can be several GB also. Space isn't too restrictive, but obviously I want to take up less rather than more.
Is there some intelligent approach to optimizing these values, such as by inspecting my sample data? For example, what if I knew the mean, median, or mode string lengths for keys and values? Or the mean/median/mode hash depth? Note that the hash structure and contents depend on command-line options (given the same input), so in some cases I would want the D::D parameters to change also. [I know that once the D::D file is written, these parameters can't be changed.]
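For instance, I could pull those numbers out of the sample with something like the sketch below (it assumes the sample is a tab-separated key/value text file; the name 'sample.txt' and the parsing are placeholders):

```perl
use strict;
use warnings;
use List::Util qw(sum);

# Sketch: collect key/value length statistics from a sample file.
# 'sample.txt' and the tab-separated format are assumptions;
# adapt the parsing to however the sample is actually stored.
my (@key_len, @val_len);
open my $fh, '<', 'sample.txt' or die "open: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($k, $v) = split /\t/, $line, 2;
    next unless defined $v;
    push @key_len, length $k;
    push @val_len, length $v;
}
close $fh;
die "no key/value pairs found\n" unless @key_len;

for ( [ key => \@key_len ], [ value => \@val_len ] ) {
    my ($name, $lens) = @$_;
    my @sorted = sort { $a <=> $b } @$lens;
    my $mean   = sum(@sorted) / @sorted;
    my $median = $sorted[ int(@sorted / 2) ];
    my %count;
    $count{$_}++ for @sorted;
    my ($mode) = sort { $count{$b} <=> $count{$a} } keys %count;
    printf "%-6s lengths: mean %.1f, median %d, mode %d, max %d\n",
        $name, $mean, $median, $mode, $sorted[-1];
}
```

Presumably, if typical values run well past the default 64-byte sector, a larger data_sector_size would cut down on sector chaining, at the cost of wasted space on short values -- but I don't know if that intuition holds in practice.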
Or should I just run benchmarks on all (power-of-two?) combinations of these parameters?
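A brute-force sweep would be easy enough to script; roughly like this (load_sample() stands in for whatever populates the db from the representative sample, and the file names are placeholders):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);
use DBM::Deep;

# Sketch of a parameter sweep: time each build and record file size.
sub load_sample {
    my ($db) = @_;
    # ... insert/update keys from the representative sample here ...
}

for my $max_buckets (16, 32, 64, 128, 256) {
    for my $sector (32, 64, 128, 256) {
        my $file = "bench_${max_buckets}_${sector}.db";
        unlink $file;

        my $t0 = [gettimeofday];
        my $db = DBM::Deep->new(
            file             => $file,
            max_buckets      => $max_buckets,
            data_sector_size => $sector,
        );
        load_sample($db);
        undef $db;    # release the handle before measuring size
        my $elapsed = tv_interval($t0);

        printf "max_buckets=%3d sector=%3d  %8.2fs  %10d bytes\n",
            $max_buckets, $sector, $elapsed, -s $file;
    }
}
```

But with multi-day runs on the full data set, I'd want to be reasonably sure that results on a small sample carry over before committing to that.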
-QM
--
Quantum Mechanics: The dreams stuff is made of