in reply to How to best handle memory intensive operations

If you can make your data look like a hash, you can use something like BerkeleyDB, which is quite good at handling large amounts of data (with caching, etc.). I wouldn't use an RDBMS unless I had relations (i.e., more than one table) or a need for a query language. You have neither of these.

By tuning the page size and cache size, and giving other hints to BerkeleyDB, you can get very good performance, and you can whip up something to try very quickly. I'd suggest using BerkeleyDB through its tied-hash interface, backed by a BTree, and then seeing if it's fast enough. The results may surprise you.
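For what it's worth, here is a rough sketch of what I mean, using the BerkeleyDB Perl module. The filename, cache size, and page size below are just placeholders you'd tune for your own data:

    use strict;
    use BerkeleyDB;

    # Tie a plain Perl hash to an on-disk BTree; tune -Cachesize and
    # -Pagesize for your record sizes and available memory.
    my %data;
    tie %data, 'BerkeleyDB::Btree',
        -Filename  => 'mydata.db',          # placeholder filename
        -Flags     => DB_CREATE,
        -Cachesize => 32 * 1024 * 1024,     # e.g. a 32 MB cache
        -Pagesize  => 8192                  # e.g. 8 KB pages
        or die "Cannot open mydata.db: $! $BerkeleyDB::Error\n";

    $data{some_key} = 'some value';         # stores go straight to the database
    my $value = $data{some_key};            # fetches come back through the cache

    untie %data;

Once the hash is tied, the rest of your code doesn't need to know the data lives on disk at all.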


Re: Re: How to best handle memory intensive operations
by JPaul (Hermit) on Jul 27, 2001 at 00:59 UTC
    BerkeleyDB is an RDBMS.
    The only thing that makes it different is that it's embedded. I'll give it a try on your advice, however.

    JP,

    -- Alexander Widdlemouse undid his bellybutton and his bum dropped off --

      BerkeleyDB is a DBMS, but not a relational one in the usual sense (unless you count the ability to do joins). Most people today think of an RDBMS as something that provides an SQL interface, multiple table columns, and so on; although you could build these on top of BerkeleyDB (as MySQL does), it doesn't have them by itself.