Well, according to TFM, you hand the handle a directory on init. It will map tables to files in that dir of the form ${table_name}.csv. So if you already have the CSVs, simply drop 'em in an empty directory and get crackin'.
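Rough sketch of what that looks like; the directory path and the f_ext setting are made up here, so point them at whatever you actually have:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hand DBD::CSV the directory on connect; each ${table_name}.csv
    # inside it shows up as a table. The path below is a placeholder.
    my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
        f_dir      => '/path/to/your/csv/dir',
        f_ext      => '.csv',
        RaiseError => 1,
    }) or die $DBI::errstr;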
However, I wouldn't keep the tables by date: following a single boat would then mean hitting many tables, and hence many file operations, on every access (if what you mostly want is where all the boats are at once, on the other hand, by-date would be a good choice).
Instead I'd keep them by boat, i.e. one table/file per boat. That makes more sense to me. The compromise is that you lose append/delete performance for a gain in access performance (which do you do more of?).
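To make that tradeoff concrete, here's a sketch of access under the per-boat layout (boat_42, ts, lat, and lon are invented names; substitute whatever your feed provides):

    # One boat's whole track is one statement against one file.
    my $track = $dbh->selectall_arrayref(
        'SELECT ts, lat, lon FROM boat_42 ORDER BY ts',
        { Slice => {} },
    );

    # With by-date tables you'd repeat that query across every daily
    # table the boat shows up in -- one file operation per day.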
That said, I wouldn't handle the appends and deletes with DBI at all. It's a CSV file. Append new records to the end of the file (very cheap) and drop old entries from the top when the time comes.
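Something along these lines, assuming the first field of each record is an epoch timestamp (the file name and record layout are placeholders):

    # Appending is cheap: nothing already in the file gets parsed.
    sub append_records {
        my ($file, @records) = @_;
        open my $fh, '>>', $file or die "append $file: $!";
        print {$fh} join(',', @$_), "\n" for @records;
        close $fh or die "close $file: $!";
    }

    # Dropping old entries from the top means one rewrite of the file,
    # done only when records actually age out.
    sub trim_before {
        my ($file, $cutoff) = @_;
        open my $in, '<', $file or die "read $file: $!";
        my @keep = grep { (split /,/)[0] >= $cutoff } <$in>;
        close $in;
        open my $out, '>', "$file.tmp" or die "write $file.tmp: $!";
        print {$out} @keep;
        close $out or die "close $file.tmp: $!";
        rename "$file.tmp", $file or die "rename $file: $!";
    }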
I guess I have fundamental issues with using DBI for this. You have a dataset that is only really useful in an ordered way, that arrives in an ordered sequence (easy pickings), and you want to store it through a protocol that makes no guarantees about order? Something doesn't seem right.
And yeah, netCDF only works for you if it's already working for you. But you can get some pretty neat data using it.