in reply to DBM::Deep overhead
DBM::Deep effectively creates a full-service database: one you can query, edit entry by entry, and use for just about anything you could do with a database, all while keeping the data-structure motif. That carries overhead in both file size and time. Storable, by contrast, creates a freeze-dried data dump that can be reconstituted quickly and easily. But while it's in storage, it's not useful, so there's no need for an elaborate framework to make the data accessible the way a database would.
In your test of approximately 37K of data, Storable is the hands-down winner. If your objective is simply to freeze your data in time and thaw it later, Storable has the performance advantage. If you need to interact with the stored data, the database route could come out ahead, since it allows reads and edits of individual elements without rewriting the entire data structure each time. It's all in how you plan to use it.
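To make the contrast concrete, here's a minimal sketch of the two styles (the %data contents and file names are made up for illustration):

```perl
use strict;
use warnings;
use Storable qw(nstore retrieve);
use DBM::Deep;

my %data = ( apples => 3, pears => 7 );

# Storable: freeze-dry the whole structure in one shot...
nstore( \%data, 'data.stor' );

# ...and reconstitute the whole thing later. Changing even one
# element means retrieving, modifying, and re-storing everything.
my $thawed = retrieve('data.stor');
$thawed->{apples} = 4;
nstore( $thawed, 'data.stor' );

# DBM::Deep: the file on disk *is* the live data structure. One
# assignment updates just that entry; no full rewrite required.
my $db = DBM::Deep->new('data.db');
$db->{apples} = 4;
```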
By the way: sorting your hash, as in my %hash_sorted = sort %hash; (line two of your posted snippet), is not helpful. First, hashes have no intrinsic order, so sorting one accomplishes nothing. Second, handing a hash to a list function such as sort sends it a flat list of key, value, key, value, key, value. The sorted output is another flat list that may well come out key, key, value, key, value, value; in other words, your keys and values get jumbled together. That sorted list then gets assigned back into a hash, so any value that suddenly became a key now has to be unique, and any non-unique values that became keys will cause some of the values to be silently dropped. Sorting a hash will make a big mess of it.
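A small demonstration of the jumbling, plus the usual idiom for processing a hash in sorted order (the example hash is made up):

```perl
use strict;
use warnings;

my %hash = ( b => 'x', a => 'c' );

# WRONG: the hash flattens to the list ( 'b', 'x', 'a', 'c' ) (in some
# order), sort yields ( 'a', 'b', 'c', 'x' ), and reassembling that
# into a hash gives ( a => 'b', c => 'x' ): keys and values jumbled.
my %hash_sorted = sort %hash;

# RIGHT: leave the hash alone and sort the *keys* at the point of use.
for my $key ( sort keys %hash ) {
    print "$key => $hash{$key}\n";
}
```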
Dave
Re^2: DBM::Deep overhead
by perlpipe (Acolyte) on Apr 21, 2011 at 02:27 UTC