Although I have been using Perl for a number of years now, I haven't used any database packages. After reading about persistent data in "Mastering Perl", I decided to try DBM::Deep and compare it with Storable. In the following code my hash has about 350 elements, each a reference to an anonymous array of 13 values (file stats, extension, etc.); I would estimate there are 150-200 bytes of data per hash element. I sort the hash (to make $db->import faster) and then time both a save by DBM::Deep and a save by Storable.
    use feature 'say';
    use DBM::Deep;
    use Storable qw(store);

    my $db = DBM::Deep->new( file => "c:/testpad/collector.Deep", autoflush => 0 );

    # Rebuild the hash with sorted keys. (Note: "sort %hash" would sort the
    # flattened key/value list and scramble the pairs, so sort the keys instead.)
    my %hash_sorted = map { $_ => $hash{$_} } sort keys %hash;

    my $st = time;
    $db->import( \%hash_sorted );
    my $deltatime = time - $st;
    say "Time required to write hash using DBM::Deep: $deltatime secs";

    $st = time;
    store( \%hash_sorted, "c:/testpad/collector.storable" );
    $deltatime = time - $st;
    say "Time required to write hash using Storable: $deltatime secs";
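Since Storable finishes in under a second, the built-in integer time() can't resolve its run time; the core Time::HiRes module gives sub-second timing. A minimal sketch (the %hash here is synthetic stand-in data of roughly the shape described above, and the file name is illustrative):

    use strict;
    use warnings;
    use feature 'say';
    use Time::HiRes qw(gettimeofday tv_interval);
    use Storable qw(store);

    # Stand-in data: 350 keys, each mapping to a 13-element anonymous array.
    my %hash = map { "file$_" => [ ($_) x 13 ] } 1 .. 350;

    my $t0 = [gettimeofday];
    store( \%hash, "collector.storable" );
    say sprintf "Storable store took %.4f secs", tv_interval($t0);

The same $t0/tv_interval pattern around $db->import would give a directly comparable figure for DBM::Deep.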
The result is as follows:
Storable: under a second elapsed time and a file size of 36.9 KB
DBM::Deep: 15 secs and a file size of 4.0 MB
The overhead in CPU time and file size with DBM::Deep seems excessive to me. Any comments? Is this what to expect from DBM::Deep or any other DB package? Are there options I have missed?
I would appreciate any insight you can provide.
In reply to DBM::Deep overhead by perlpipe