QM has asked for the wisdom of the Perl Monks concerning the following question:
I want to tune two of the DBM::Deep creation parameters:

- max_buckets: the number of entries that can be added before a reindexing. [16..256], default 16.
- data_sector_size: the size in bytes of a given data sector. [32..256], default 64.

Assume I have a representative sample of the data that would be stored. I want to choose parameter values that optimize speed and disk space. I would prioritize speed, but I would give up a few percentage points of speed for a big improvement in disk space.
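For reference, here is a minimal sketch of how these parameters get passed at creation time (the file name and the values shown are just placeholders, not my real settings):

```perl
use strict;
use warnings;
use DBM::Deep;

# Both parameters are fixed when the file is created and cannot be
# changed afterwards.
my $db = DBM::Deep->new(
    file             => 'sample.db',   # placeholder path
    max_buckets      => 16,            # [16..256], default 16
    data_sector_size => 64,            # [32..256], default 64
);

$db->{some_key} = 'some value';
```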
I have a script that will read several GB of input data and create/update the hash/db file, then output some of the hash data in table format. Processing time can take several days, and the db file size can be several GB also. Space isn't too restrictive, but obviously I want to take up less rather than more.
Is there some intelligent approach to optimizing these values, such as by inspection of my sample data? For example, what if I knew the mean, median, or mode string lengths of the keys and values? Or the mean/median/mode hash depth? Note that the hash structure and contents depend on command line options (given the same input), so in some cases I would want the D::D parameters to change also. [I know that once the D::D file is written, these parameters can't be changed.]
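To be concrete about the inspection idea, this is the sort of summary I have in mind: a rough sketch that tallies key and value lengths from a sample file (the tab-separated, one-pair-per-line input format is assumed here just for illustration):

```perl
use strict;
use warnings;
use List::Util qw(sum max);

# Tally key and value lengths from a tab-separated sample file
# (the input format is assumed for illustration only).
my (@key_len, @val_len);
while ( my $line = <> ) {
    chomp $line;
    my ( $key, $value ) = split /\t/, $line, 2;
    next unless defined $value;
    push @key_len, length $key;
    push @val_len, length $value;
}

for my $set ( [ 'keys', \@key_len ], [ 'values', \@val_len ] ) {
    my ( $name, $lens ) = @$set;
    next unless @$lens;
    my @sorted = sort { $a <=> $b } @$lens;
    printf "%-6s n=%d mean=%.1f median=%d max=%d\n",
        $name, scalar @sorted,
        sum(@sorted) / @sorted,
        $sorted[ int( $#sorted / 2 ) ],
        max(@sorted);
}
```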
Or should I just run benchmarks on all (power of two?) combinations of these parameters?
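And if benchmarking is the way to go, this is roughly the harness I would run against the sample: load the same data into a fresh file for each power-of-two combination and record wall time and resulting file size (load_sample() is a stand-in for my real loading code):

```perl
use strict;
use warnings;
use DBM::Deep;
use Time::HiRes qw(time);

# Stand-in for the real loading code; it would read the sample input
# and build the nested hash inside $db.
sub load_sample {
    my ($db) = @_;
    # ... real loading goes here ...
}

for my $buckets (16, 32, 64, 128, 256) {
    for my $sector (32, 64, 128, 256) {
        my $file = "bench_${buckets}_${sector}.db";
        unlink $file;    # start from a fresh file each time

        my $db = DBM::Deep->new(
            file             => $file,
            max_buckets      => $buckets,
            data_sector_size => $sector,
        );

        my $t0 = time;
        load_sample($db);
        my $elapsed = time - $t0;

        undef $db;       # release the handle so the size on disk is final
        printf "max_buckets=%-3d data_sector_size=%-3d  %8.2fs  %12d bytes\n",
            $buckets, $sector, $elapsed, -s $file;
    }
}
```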
-QM
--
Quantum Mechanics: The dreams stuff is made of
Replies are listed 'Best First'.
Re: Optimizing DBM::Deep file parameters
by tilly (Archbishop) on Aug 29, 2008 at 16:33 UTC
    by QM (Parson) on Aug 29, 2008 at 20:49 UTC
        by tilly (Archbishop) on Aug 29, 2008 at 22:39 UTC

Re: Optimizing DBM::Deep file parameters
by jettero (Monsignor) on Aug 29, 2008 at 13:49 UTC

Re: Optimizing DBM::Deep file parameters
by dragonchild (Archbishop) on Aug 31, 2008 at 01:32 UTC