in reply to Need DBM file that holds data up to 50,000 bytes

BerkeleyDB seems promising, but I couldn't find a limit on the data or number of keys.
The limit on data size is somewhere in the terabyte range. I don't think there is a limit on the number of keys (I have one with 70 million keys). SQLite is excellent, but if you want a drop-in replacement, DB_File or BerkeleyDB is the way to go.
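
For the drop-in route, the tie itself is the only change; here is a minimal sketch with DB_File (file name, permissions and the 50,000-byte test value are just placeholders):

    use strict;
    use warnings;
    use Fcntl;     # O_CREAT, O_RDWR
    use DB_File;

    # Tie a hash to a Berkeley DB hash file; unlike SDBM_File there is no
    # ~1 KB cap on the combined key + value size.
    tie my %db, 'DB_File', 'data.db', O_CREAT | O_RDWR, 0666, $DB_HASH
        or die "Cannot tie data.db: $!";

    $db{big} = 'x' x 50_000;        # a 50,000-byte value is no problem
    print length($db{big}), "\n";   # 50000

    untie %db;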

Re^2: Need DBM file that holds data up to 50,000 bytes
by Tux (Canon) on Aug 11, 2014 at 15:27 UTC

    I agree with that statement for BerkeleyDB, but I wrote Tie::Hash::DBD because I ran into serious limitations when I used DB_File on a system with low resources. Those limitations caused the complete hash to become invalid.
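
    The basic use is a straight swap for a DB_File tie. A minimal sketch (the SQLite DSN is just an example backend; any supported DBD will do):

        use Tie::Hash::DBD;

        # The hash data now lives in a database table instead of a local DB file
        tie my %hash, 'Tie::Hash::DBD', 'dbi:SQLite:dbname=hash.sqlite';

        $hash{key} = 'x' x 50_000;   # large values go straight to the database
        untie %hash;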


    Enjoy, Have FUN! H.Merijn
      I haven't come up against those limitations myself, but then again I migrated all of my DB_File usage to BerkeleyDB some time ago.
      What limitations? Got a link?

        A slow old HP-UX system with 50+ users, just 2 GB of RAM, less than 2 GB of disk space available, and a process running for over 48 hours. Dropping in Tie::Hash::DBD to use the already open Oracle database (the server was another machine) made the process finish instead of crash. It was a bit slower, but at least it finished.

        You can say RAM is cheap nowadays, but one cannot force customers to upgrade machines.
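
        Roughly what the drop-in looked like, as a sketch from memory (connection details and the work loop are placeholders); the module also accepts an already connected handle, so the Oracle connection that was open anyway could be reused:

            use DBI;
            use Tie::Hash::DBD;

            # The Oracle connection the long-running process already had open
            # (DSN and credentials are placeholders)
            my $dbh = DBI->connect ("dbi:Oracle:host=dbserver;sid=PROD",
                "user", "pass", { RaiseError => 1, AutoCommit => 1 });

            # The work hash now lives in a table on the Oracle server instead of
            # in local RAM or a local DB file on the HP-UX box
            tie my %work, "Tie::Hash::DBD", $dbh;

            $work{$_} = heavy_calculation ($_) for @input;   # hypothetical work loop

            untie %work;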


        Enjoy, Have FUN! H.Merijn
Re^2: Need DBM file that holds data up to 50,000 bytes
by bulrush (Scribe) on Aug 11, 2014 at 21:36 UTC
    Are you saying I can have a single hash key that holds terabytes of information, limited only by my hardware and OS capabilities?
    Perl 5.8.8 on Redhat Linux RHEL 5.5.56 (64-bit)
      Max file size is 256 terabytes.

      From the Berkeley DB FAQ:

      Are there any constraints on table size?
      The table size is generally limited by the maximum file size possible on the file system.

      Are there any constraints on the number of records that can be stored in a table?
      There is no practical limit; the number of records is bounded only by what can be indexed with a signed 64-bit value.

      Are there any constraints on record size?
      There is no practical constraint. The maximum length of a string or blob field is 1 billion bytes.

      If you've got a 64-bit Perl build running on a 64-bit OS, then yes, that's indeed the case. That said, tangent was talking about BerkeleyDB's limitations, not Perl's.
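
      A quick way to check is to ask perl for its own build configuration, e.g.:

          perl -V:archname -V:use64bitall

      If use64bitall is 'define', the build has full 64-bit integer and pointer support.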