in reply to Need DBM file that holds data up to 50,000 bytes

DBD::SQLite is very easy. It is not the most scalable option, but since you are talking about only 1000 keys, that is unlikely to become a problem.
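For the DBM-style use case, a minimal sketch of DBD::SQLite as a key/value store might look like this (assuming DBD::SQLite is installed; the table and column names are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# In-memory database for illustration; use dbname=kv.db for a file on disk
my $dbh = DBI->connect ("dbi:SQLite:dbname=:memory:", undef, undef,
    { RaiseError => 1, AutoCommit => 1 });

# One table as a simple key/value store
$dbh->do ("create table kv (k text primary key, v blob)");

# Values far beyond typical DBM size limits are no problem
my $sth = $dbh->prepare ("insert or replace into kv (k, v) values (?, ?)");
$sth->execute ("record1", "x" x 50_000);

my ($v) = $dbh->selectrow_array (
    "select v from kv where k = ?", undef, "record1");
```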

Alternatively, you could go for a real database server and use PostgreSQL, which has the excellent hstore datatype. hstore is basically an associative array (=hash).
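As a sketch of what that looks like on the server side (assuming the hstore contrib extension is available; the table and column names are invented for illustration):

```sql
-- Enable the contrib extension (once per database)
CREATE EXTENSION IF NOT EXISTS hstore;

-- One hstore column holds arbitrary key/value pairs per row
CREATE TABLE records (
    id   serial PRIMARY KEY,
    data hstore
);

INSERT INTO records (data)
    VALUES ('name => camel, humps => "2"');

-- Fetch a single value with the -> operator
SELECT data -> 'name' FROM records;
```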

Of course, with Postgres you have a server on your hands that will need some maintenance. It offers far more features and performance and far fewer limitations, but it is not nearly as easy as SQLite.

Re^2: Need DBM file that holds data up to 50,000 bytes
by pvaldes (Chaplain) on Aug 12, 2014 at 11:01 UTC

    I was asking myself exactly the same question: 'why not one of the big ones, like MySQL or Postgres or Firebird or...?'

    Alternatively, you could consider a NoSQL database (e.g. Mongo) or even plain Perl for this. Since you said that speed is secondary, and those use text files for storage, a 'terabyte level' file size (probably far more than you need) is available on most systems.

    mongo tutorial (CPAN)

      Though maybe interesting, Postgres' hstore feature is a language of its own and does not easily integrate with how other access methods work. There is Pg::hstore, but its API is IMHO not very obvious. It is certainly not an easy replacement for DB_File.

      In my perception *all* databases suck. Not all suck in the same way, but there is no perfect database (yet). You will need to investigate your needs before making a choice. Oracle has NULL problems (and is costly), MySQL does not follow ANSI in its default configuration and uses stupid quoting, Postgres will return too much by default on big tables, Unify does not support varchar, CSV is too slow, SQLite does not support multiple concurrent sessions, Firebird has no decent DBD (yet), DB2 is bound to IBM, Ingres does not have many users in the Perl community, etc.

      Too many factors to think about. For a single-user easy DB_File replacement, BerkeleyDB comes first, then Tie::Hash::DBD in combination with DBD::SQLite. I say so because neither needs any special environment or configuration. Once you choose a major DB (whatever you choose), you will need additional knowledge or services. My choice then would be Postgres, as it is the easiest to work with and confronts me with the least irritation.
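A minimal sketch of that Tie::Hash::DBD plus DBD::SQLite combination (assuming both modules are installed; the in-memory DSN is just for illustration, use a file-based dbname to persist between runs):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Tie::Hash::DBD;

# Tie a plain hash to an SQLite-backed table; no server or config needed
tie my %hash, "Tie::Hash::DBD", "dbi:SQLite:dbname=:memory:";

$hash{record1} = "x" x 50_000;   # no small-value limit as with SDBM_File
my $len = length $hash{record1};

untie %hash;
```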

      Nobody mentioned other alternatives yet:


      Enjoy, Have FUN! H.Merijn
        Postgres will return too much by default on big tables

        What do you mean by "return too much"? That sounds serious.

        And I think you do not give enough credit to the freedom of the software. Oracle has a great database, but it is ridiculously expensive to run even a single instance, and MySQL and BerkeleyDB are pawns in Oracle's hands. In my opinion that is a *very* good reason not to use them (I kicked them out when Oracle took them over).

        With regard to cdb: its main annoyance is that it is for databases that do not change (this is by design: it is, after all, named cdb, "constant database"). Perhaps it fits the OP's requirements, but it is often a pain (kicked that out, too ;-))

        SQLite is nice but pretty simple (and was and is inspired by PostgreSQL, its author told us at PGCon - see here, the talk by Richard Hipp).

        Yeah, I agree: Too many factors to think about :)

        And IMHO PostgreSQL does not suck. :)

        Care to elaborate on the "don't use" with regard to the xDBM_File modules? What sort of issues?