But as for DB_File, its size limits are the maximum size of a file on your filesystem, the amount of space you have, or (depending on configuration parameters) 100 terabytes. Size is simply not an issue, though with large data sets you are strongly advised to use a BTree rather than a hash.
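To make that concrete, here is a minimal sketch of tying a hash to an on-disk BTree with DB_File (the file name and data are made up for illustration):

```perl
use strict;
use warnings;
use DB_File;
use Fcntl;

# Tie a plain Perl hash to an on-disk Berkeley DB BTree.
# $DB_BTREE is exported by DB_File; use it instead of $DB_HASH
# for large data sets.
tie my %scores, 'DB_File', 'scores.db', O_RDWR|O_CREAT, 0644, $DB_BTREE
    or die "Cannot open scores.db: $!";

$scores{alice} = 42;
$scores{bob}   = 17;

# With a BTree, keys come back in sorted order when you iterate.
for my $name (keys %scores) {
    print "$name: $scores{$name}\n";
}

untie %scores;
```

Everything else is the familiar hash interface; the data just happens to live on disk.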
However, there are two large drawbacks to using an interface to Berkeley DB. The first is that (as you found) it has a very simple data model: it will handle a lot of data, but it does not (yet) offer SQL or other complex data structures. (I heard a rumor that they were considering adding this, but I don't know whether they have.) The second is that it maps the library directly into your process. This makes recovery and restoration of an active system hard. It also makes it hard to keep the data of multiple webservers in sync. (You cannot mount the database over NFS.)
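The "simple data model" point means you get flat string keys and values, nothing more. If you want to store a structure, you have to serialize it yourself. A sketch, using Storable (the field names here are invented for the example):

```perl
use strict;
use warnings;
use DB_File;
use Fcntl;
use Storable qw(freeze thaw);

# Berkeley DB stores only flat strings, so complex Perl data
# must be flattened by hand on the way in and rebuilt on the way out.
tie my %db, 'DB_File', 'users.db', O_RDWR|O_CREAT, 0644, $DB_HASH
    or die "Cannot open users.db: $!";

# freeze() turns the hashref into a flat string DB_File can store.
$db{mandog} = freeze({ xp => 100, level => 'monk' });

# thaw() rebuilds the structure from the stored string.
my $user = thaw($db{mandog});
print "xp: $user->{xp}\n";

untie %db;
```

(Modules like MLDBM wrap this freeze/thaw dance for you, but the underlying database still only ever sees strings.)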
And finally, about Postgres. If you want a full-featured open-source database now, I would recommend it. While I have not played with it, I understand that not only does it have stored procedures, but you can even choose to write them in Perl. It will not offer the raw performance of MySQL or Berkeley DB (the price you pay for features and safety). Heavy database folks tell me that its optimizer is not as good as Oracle's. But then again, the person who mentioned that to me also tends to load 50 GB of data into his database every week. I doubt that your needs are quite so extreme...
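As I said, I have not played with this myself, but my understanding is that a PL/Perl function is declared with ordinary SQL, with the body written in Perl. A rough sketch (function name and logic invented for illustration):

```sql
-- Assumes the plperl language has been installed in the database.
CREATE FUNCTION double_it(integer) RETURNS integer AS '
    my ($x) = @_;     # arguments arrive in @_, as in any Perl sub
    return 2 * $x;
' LANGUAGE plperl;

-- Then call it like any other function:
-- SELECT double_it(21);
```

So if your business logic is already in Perl, you may be able to push some of it into the database itself rather than rewriting it in another procedural language.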
In reply to Re (tilly) 1: exploring XP DBI DBD DBM SQL RDMS MySQL Postgres
by tilly
in thread exploring XP DBI DBD DBM SQL RDMS MySQL Postgres
by mandog