in reply to Perl and data/databases
If a Perl module has to come with data to be useful, how much data should be part of the module? If the module could come with MB of data, should it?
The simplest answer to that is: just enough to make it useful and no more.
If there is a larger corpus of data that may be required by some users, consider packaging the larger dataset as a separate distribution. E.g. Xxx::Yyy comes with the minimally useful dataset; installing Xxx::Yyy::Pqr installs the bigger dataset for those that need it.
It's easy to see that this can be extended to multiple 'expansion' sets.
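A minimal sketch of that pattern. Xxx::Yyy::Pqr is the hypothetical expansion module from above (not a real CPAN distribution), and full_dataset() is an invented accessor it might expose; the base module just probes for it at load time and falls back to its own small dataset:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Probe for the optional expansion module. Xxx::Yyy::Pqr is the
# hypothetical name used above; require() fails harmlessly inside
# eval if it is not installed.
my $have_expansion = eval { require Xxx::Yyy::Pqr; 1 };

my %dataset;
if ($have_expansion) {
    # The expansion module would expose its larger dataset via some
    # documented interface -- full_dataset() is made up for illustration.
    %dataset = Xxx::Yyy::Pqr::full_dataset();
}
else {
    # Fall back to the minimal dataset shipped with Xxx::Yyy itself.
    %dataset = ( en => 'minimal English data' );
}

print "expansion loaded: ", ($have_expansion ? "yes" : "no"), "\n";
print "entries: ", scalar(keys %dataset), "\n";
```

The same probe-and-fallback works for any number of expansion sets: loop over a list of candidate module names and load whichever are installed.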
In terms of developing the data, I think a persistent Perl structure is required. How does/should one decide between the two (DBM::Deep vs. DBM::Deep::Blue) above?
To my knowledge, DBM::Deep is pure Perl, and runs pretty much anywhere Perl does.
DBM::Deep::Blue is written in C, so installing it requires a compiler, potentially limiting your user base. Which may or may not be a bad thing. It also requires Perl 5.12.1, which further limits your target audience.
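Neither module is core, so as a self-contained illustration of the underlying idea (a Perl structure persisted to disk and read back later), here is a sketch using core Storable instead. DBM::Deep layers transparent, tied, partial-read access on top of this kind of serialization:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable   qw(nstore retrieve);
use File::Temp qw(tempfile);

# A small HoH to persist -- a stand-in for your real dataset.
my %hoh = (
    alice => { age => 42, city => 'Leeds'  },
    bob   => { age => 37, city => 'Dundee' },
);

# Scratch file, removed automatically at program exit.
my ($fh, $file) = tempfile(UNLINK => 1);
close $fh;

# nstore() writes in network byte order, so the file is portable
# across architectures.
nstore(\%hoh, $file);

# Later (or in another process): read the whole structure back.
my $copy = retrieve($file);
print "bob lives in $copy->{bob}{city}\n";
```

The trade-off versus DBM::Deep is that Storable loads the entire structure into memory at once, which is exactly what you may want to avoid with a multi-MB dataset.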
Most of what I am doing is HoH (hash of hashes); some is LoH (list of hashes). Are there any lessons on how to turn Lo* or Ho* into SQL?
Do you just want to be able to query an individual sub* by its primary key? Or do you need to subselect to a finer granularity?
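For the simple case, the usual mapping is: the outer hash key becomes the primary key column and each inner hash becomes a row. A core-Perl sketch that emits SQL from an HoH (the table and column names are made up for illustration; real code should use DBI placeholders rather than string interpolation):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample HoH: outer key is the natural primary key, and the inner
# hashes all share the same fields.
my %people = (
    alice => { age => 42, city => 'Leeds'  },
    bob   => { age => 37, city => 'Dundee' },
);

# Columns: the inner-hash fields, sorted for a stable order.
my @fields = sort keys %{ (values %people)[0] };

my @sql = "CREATE TABLE people (id TEXT PRIMARY KEY, "
        . join(', ', map { "$_ TEXT" } @fields) . ");";

for my $id (sort keys %people) {
    my $row  = $people{$id};
    # Quoting by interpolation is for illustration only; use DBI
    # placeholders in real code to avoid SQL injection.
    my @vals = map { "'$row->{$_}'" } @fields;
    push @sql, "INSERT INTO people (id, @{[ join ', ', @fields ]}) "
             . "VALUES ('$id', @{[ join ', ', @vals ]});";
}

print "$_\n" for @sql;
```

An LoH maps the same way, except there is no natural key in the outer structure, so you either promote one of the inner fields to primary key or add a synthetic autoincrement id. Subselecting at finer granularity usually means splitting nested inner hashes into their own table with a foreign key back to the parent.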