in reply to DBM::Deep overhead

Most modern DBMSes are built for size rather than speed. DBM::Deep is intended simply to bring a Perl-only implementation of a DBMS into play. Storable files will slow down faster than DBM::Deep as table sizes increase. However, one approach might be to implement a module that transparently uses multiple Storable files per table, with a hashing algorithm that selects where to store each value based on a primary-key concept. On top of that you would want a virtual-memory-style architecture: an array of Storable filenames, each of limited individual size but unlimited in number per "table", tracking which thawed references are currently being kept alive, plus a policy of forcibly freezing references to minimise the number of shards held in memory. The module would need to pick a victim to drop from memory every time it needed to thaw something not currently active. Such a pseudo-DBMS module should, of course, keep its interaction with Storable under the hood.
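To make that concrete, here is a minimal sketch of such a layer. The module name ShardedStore, the shard count, and the max_live eviction limit are all hypothetical choices for illustration; only the Storable calls (nstore/retrieve) and Digest::MD5 are real APIs. It hashes the primary key to pick a shard file, thaws shards on demand, and forcibly freezes the least recently used shard whenever too many are live:

    #!/usr/bin/perl
    # ShardedStore: hypothetical sketch of a multi-Storable "table".
    package ShardedStore;

    use strict;
    use warnings;
    use Storable qw(nstore retrieve);
    use Digest::MD5 qw(md5_hex);

    sub new {
        my ($class, %args) = @_;
        my $self = {
            dir      => $args{dir}      || '.',
            shards   => $args{shards}   || 16,  # Storable files per table
            max_live => $args{max_live} || 4,   # shards allowed thawed at once
            live     => {},                     # shard index => thawed hashref
            lru      => [],                     # shard indices, most recent last
        };
        return bless $self, $class;
    }

    # Hash the primary key to decide which Storable file holds it.
    sub _shard_for {
        my ($self, $key) = @_;
        return hex(substr(md5_hex($key), 0, 8)) % $self->{shards};
    }

    sub _file_for {
        my ($self, $idx) = @_;
        return "$self->{dir}/shard_$idx.sto";
    }

    # Thaw a shard on demand; forcibly freeze the least recently
    # used shard back to disk when too many are live in memory.
    sub _load {
        my ($self, $idx) = @_;
        if (exists $self->{live}{$idx}) {
            @{ $self->{lru} } =
                ((grep { $_ != $idx } @{ $self->{lru} }), $idx);
        }
        else {
            if (@{ $self->{lru} } >= $self->{max_live}) {
                my $victim = shift @{ $self->{lru} };    # pick a victim
                nstore($self->{live}{$victim}, $self->_file_for($victim));
                delete $self->{live}{$victim};
            }
            my $file = $self->_file_for($idx);
            $self->{live}{$idx} = -e $file ? retrieve($file) : {};
            push @{ $self->{lru} }, $idx;
        }
        return $self->{live}{$idx};
    }

    # The public interface keeps Storable entirely under the hood.
    sub get {
        my ($self, $key) = @_;
        return $self->_load($self->_shard_for($key))->{$key};
    }

    sub set {
        my ($self, $key, $value) = @_;
        $self->_load($self->_shard_for($key))->{$key} = $value;
    }

    # Flush every live shard back to its Storable file.
    sub sync {
        my ($self) = @_;
        nstore($self->{live}{$_}, $self->_file_for($_))
            for keys %{ $self->{live} };
    }

    1;

Used something like this:

    my $table = ShardedStore->new(dir => '/tmp/users', shards => 32);
    $table->set(user42 => { name => 'Alice' });
    print $table->get('user42')->{name}, "\n";
    $table->sync;

A real version would also need locking and dirty-shard tracking, so clean shards aren't needlessly rewritten on every eviction.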

One world, one people