Just tried out DBM::Deep. It was easy to install, create a database, and start working with. But there are a few issues, which might just be because I don't know enough.
Issue 1: Size -- I converted a two-column, 230k-row SQLite table into a DBM::Deep database and a DB_File database. The 14 MB SQLite file became a 25 MB DBM::Deep file, but shrank to a 5 MB DB_File file. The conversion itself was roughly as shown in the sketch below.
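A minimal sketch of the conversion (the file names, the column names k and v, and the table name sqlitedb are placeholders, not the real schema):

  use strict;
  use warnings;
  use DBI;
  use DBM::Deep;
  use DB_File;
  use Fcntl qw(O_RDWR O_CREAT);

  my $dbh  = DBI->connect('dbi:SQLite:dbname=table.sqlite', '', '',
                          { RaiseError => 1 });
  my $deep = DBM::Deep->new('table.dbmdeep');    # creates the DBM::Deep file
  tie my %dbfile, 'DB_File', 'table.dbfile',     # creates the DB_File file
      O_RDWR|O_CREAT, 0644, $DB_HASH or die "tie failed: $!";

  # Column and table names below are guesses standing in for the real ones.
  my $sth = $dbh->prepare('SELECT k, v FROM sqlitedb');
  $sth->execute;
  while (my ($k, $v) = $sth->fetchrow_array) {
      $deep->{$k}  = $v;    # stored through DBM::Deep's hash interface
      $dbfile{$k}  = $v;    # stored through DB_File's tied hash
  }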
Issue 2: Speed -- A simplistic benchmark of counting the number of records in the table gave the following:
Benchmark: timing 1 iterations of DB_File, DBM::Deep, SQLite 3...
  DB_File:   2 wallclock secs ( 1.90 usr + 0.15 sys =  2.05 CPU) @  0.49/s
  DBM::Deep: 93 wallclock secs (79.24 usr + 9.42 sys = 88.67 CPU) @  0.01/s
  SQLite 3:  0 wallclock secs ( 0.04 usr + 0.01 sys =  0.05 CPU) @ 19.61/s
My code was simply "SELECT COUNT(*) FROM sqlitedb" for the SQLite database, and "return scalar keys(%$db)" for the other two databases. Is this expected (in particular, is the slowness of DBM::Deep expected), or is there a better way to do this query?
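For reference, the whole benchmark was roughly this (a sketch; the file names are placeholders, and the table name matches the query above):

  use strict;
  use warnings;
  use Benchmark qw(timethese);
  use DBI;
  use DBM::Deep;
  use DB_File;
  use Fcntl qw(O_RDONLY);

  # File names are placeholders for the three databases built earlier.
  my $deep = DBM::Deep->new('table.dbmdeep');
  tie my %dbfile, 'DB_File', 'table.dbfile', O_RDONLY, 0644, $DB_HASH
      or die "tie failed: $!";
  my $dbh = DBI->connect('dbi:SQLite:dbname=table.sqlite', '', '',
                         { RaiseError => 1 });

  timethese(1, {
      'DBM::Deep' => sub { scalar keys %$deep },
      'DB_File'   => sub { scalar keys %dbfile },
      'SQLite 3'  => sub {
          my ($n) = $dbh->selectrow_array('SELECT COUNT(*) FROM sqlitedb');
          $n;
      },
  });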
--
when small people start casting long shadows, it is time to go to bed