in reply to Poor Person's Database

For a cleaner, if slower, solution, check out DBD::CSV. This module lets you use SQL with CSV files. Very handy if you don't have a database installed! It won't be terribly fast or robust, but it's certainly no worse than what you're currently faced with, and if you later have the option to move to a real database, the conversion is a snap!

One of the interesting features of this module is that it allows you to create and drop tables. Ordinarily, DBI itself doesn't implement CREATE TABLE or DROP TABLE — that syntax is too database specific and is simply passed through to the driver. Since creating and deleting CSV files by hand would be tedious, this module handles it for you. Read through the documentation, as there are some pitfalls there, but I suspect they are much less serious than doing this by hand.
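A minimal sketch of what this looks like in practice — the table and column names here (`users`, `id`, `name`) are made up for illustration, and the `f_dir` attribute points DBD::CSV at a directory where each table lives as one CSV file:

```perl
use strict;
use warnings;
use DBI;
use File::Temp qw(tempdir);

# Each table in this directory is one CSV file.
my $dir = tempdir( CLEANUP => 1 );

my $dbh = DBI->connect( "dbi:CSV:", undef, undef,
    { f_dir => $dir, RaiseError => 1 } );

# CREATE/DROP TABLE just create and delete CSV files for you.
$dbh->do("CREATE TABLE users (id INTEGER, name CHAR(64))");
$dbh->do( "INSERT INTO users VALUES (?, ?)", undef, 1, 'alice' );

my ($name) = $dbh->selectrow_array(
    "SELECT name FROM users WHERE id = ?", undef, 1 );
print "$name\n";

$dbh->do("DROP TABLE users");    # removes the file again
$dbh->disconnect;
```

If you later switch to a real database, only the connect string needs to change; the SQL stays the same.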

Cheers,
Ovid

Join the Perlmonks Setiathome Group or just click on the link and check out our stats.

Replies are listed 'Best First'.
Re: (Ovid) Re: Poor Person's Database
by voyager (Friar) on Jun 20, 2001 at 19:09 UTC
    I second the suggestion to check out DBD::CSV.

    The idea of being "SQL like" and having the upgrade path to a "real" SQL is certainly a plus.

    As to performance, I've found that for reasonably small tables, selecting is no slower than with a real database, since there's no overhead of talking to a database server. Inserts, updates, and deletes are murder on large files, though, since I believe it reads and rewrites the entire file for each operation.

    But since the data is in a plain text file, for large update operations (like building the search index) you could probably take a hybrid approach: write the file directly with plain Perl, which is very fast, then use SQL to read it.
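    The hybrid approach above might look something like this — the `words` table, its columns, and the sample data are all hypothetical stand-ins for a real search index, and `f_ext` tells DBD::CSV that table `words` lives in the file `words.csv`:

    ```perl
    use strict;
    use warnings;
    use DBI;
    use File::Temp qw(tempdir);

    my $dir = tempdir( CLEANUP => 1 );

    # Bulk build: write the CSV directly with plain Perl in one pass,
    # avoiding per-row SQL insert overhead. First line is the header.
    my %freq = ( perl => 42, csv => 7, sql => 3 );    # stand-in index data
    open my $fh, '>', "$dir/words.csv" or die "open: $!";
    print {$fh} "word,freq\n";
    print {$fh} "$_,$freq{$_}\n" for sort keys %freq;
    close $fh or die "close: $!";

    # Read back through DBD::CSV with ordinary SQL.
    my $dbh = DBI->connect( "dbi:CSV:", undef, undef,
        { f_dir => $dir, f_ext => ".csv", RaiseError => 1 } );
    my ($count) = $dbh->selectrow_array(
        "SELECT freq FROM words WHERE word = ?", undef, 'perl' );
    print "perl => $count\n";
    $dbh->disconnect;
    ```

    You get the raw-file write speed for the bulk build and the SQL convenience for queries.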

    And finally, when you don't want to do SQL, you can always use vi to "update the table".