To some degree, I think I can sort the data before writing to the hash/file. However, there are several points that will hinder that:
1) The hash is multilevel. 99% of the data inserts would use the first form below. Would this be a hurdle?

    $hash{DATA}{k1}{k2}{k3}{k4}{k5}     = value1;
    $hash{DATA}{k1}{k2}{k3}{k4}{h1}{h2} = value2;
    $hash{HEADER1}{h1}{h2}              = value3;
    $hash{HEADER2}{j1}                  = value4;
2) In some cases it's possible to tell a lot about an input file's contents from its name and path, but not always. Input data files could be sorted according to their apparent contents.
Would it then make sense to use a %temp_hash for each input file, and import it into the %tied_hash at the end of each file?
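A rough sketch of that idea, assuming records can be sorted by their leading key before staging (stage_file() and its record layout are hypothetical, for illustration only; DBM::Deep's documented import() method does the bulk transfer):

```perl
use strict;
use warnings;

# Stage one input file's records in a plain in-memory hash (cheap
# writes), sorted so sibling keys land together, then hand the whole
# structure to the tied hash in a single call at end-of-file.
sub stage_file {
    my (@records) = @_;    # each record: [ $k1, $k2, $k3, $value ]
    my %temp;
    for my $r ( sort { $a->[0] cmp $b->[0] } @records ) {
        my ( $k1, $k2, $k3, $value ) = @$r;
        $temp{DATA}{$k1}{$k2}{$k3} = $value;   # mirrors the inserts above
    }
    return \%temp;
}

# At end of each file, one bulk transfer instead of many tied writes:
#   my $db = DBM::Deep->new( 'data.db' );
#   $db->import( stage_file(@records) );
```

Whether this wins depends on how much of DBM::Deep's per-write overhead the single import() actually amortizes, which seems worth benchmarking on a real input file.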
-QM
--
Quantum Mechanics: The dreams stuff is made of
In reply to Re^2: Optimizing DBM::Deep file parameters
by QM
in thread Optimizing DBM::Deep file parameters
by QM