There's a big difference between DBI and DBM::Deep, though. DBI is for talking to a relational database; DBM::Deep is for storing and accessing a Perl data structure on disk. Each has situations where it is the clear winner.
The difference between DBM::Deep and Storable is more subtle. Both store and retrieve Perl data structures on disk. DBM::Deep gives you random access to the structure without having to pull it all into memory, at the expense of being quite slow for small data sets. For a small structure of only 4000-ish elements like the OP's, the overhead of DBM::Deep appears to be very large. But when you have millions of elements, you'll find that DBM::Deep is faster.
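As a minimal sketch of that random access (assuming DBM::Deep is installed and using a hypothetical file name), a DBM::Deep object behaves like an ordinary nested hash, but every read and write goes straight to disk rather than to an in-memory copy:

```perl
use strict;
use warnings;
use DBM::Deep;

# Open (or create) the on-disk structure; nothing is loaded into RAM yet.
my $db = DBM::Deep->new("data.db");

# Writes go directly to the file.
$db->{users}{alice}{score} = 42;

# Reads fetch just the element you ask for, even in another process later.
print $db->{users}{alice}{score}, "\n";   # prints 42
```

Because the hash is tied to the file, a script that touches only a few elements of a multi-million-element structure never pays to deserialize the rest.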
To read, change, and write an element in a Storable file, you must read the entire file, update that one element, and write the entire file back. Reading and writing tens of megabytes is slow. To do the same in a DBM::Deep file, the size of the file is irrelevant: you do a handful of small seeks and reads to find the right place, then read and write just a few bytes. To a good approximation, you need one seek and roughly ten bytes of reading per level of nesting between the root of the data structure and the element you want to edit.
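For contrast, here is what that whole-file round trip looks like with Storable (a sketch using a hypothetical file name; `retrieve` and `nstore` are Storable's standard read/write functions):

```perl
use strict;
use warnings;
use Storable qw(retrieve nstore);

# Read the ENTIRE structure into memory, however large it is.
my $data = retrieve("data.stor");

# Change one element...
$data->{users}{alice}{score} = 42;

# ...then write the ENTIRE structure back out.
nstore($data, "data.stor");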
In reply to Re^2: DBM::Deep overhead
by DrHyde
in thread DBM::Deep overhead
by perlpipe