in reply to Re: Berkeley DB performance, profiling, and degradation...
in thread Berkeley DB performance, profiling, and degradation...

Thanks for your reply, Randal.

Actually, I'm explicitly avoiding a more complex relational database precisely because every action is so costly--a simple key/value DB should be much, much faster. In fact, I think MySQL is built on top of Berkeley DB, so MySQL (which in some circumstances is faster than PostgreSQL) should be at least one abstraction layer slower than Berkeley DB itself.

The process in question is the low priority "store into a database" process--we already have logs and when in daemon mode the index will be drawing its data from a File::Tail of the log file. The problem is that it will fall behind very quickly with a large database. There is no downtime in which it can catch up--the indexer must maintain 10-15 entries per second, or it will lose data. Not to mention a full index from scratch (required on occasion) will take two days or more. All very problematic.
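To make the daemon-mode loop concrete, here is a minimal sketch of the "follow the log, store each entry" process. The log path, the key:value line format, and the plain hash standing in for the real database are all hypothetical; the core seek-free read loop is written with plain filehandle reads so the sketch is self-contained, with the equivalent File::Tail call shown in a comment.

```perl
#!/usr/bin/perl
# Sketch of the daemon-mode indexing loop: read log lines,
# store each entry. The log path and "key:value" line format
# are hypothetical; %index stands in for the real Berkeley DB.
use strict;
use warnings;

my $logfile = '/tmp/indexer_demo.log';

# Create a small demo log so the sketch runs end-to-end.
open my $out, '>', $logfile or die "open $logfile: $!";
print $out "key$_:value$_\n" for 1 .. 3;
close $out;

open my $log, '<', $logfile or die "open $logfile: $!";
my %index;
while (my $line = <$log>) {
    chomp $line;
    my ($key, $value) = split /:/, $line, 2;  # hypothetical log format
    $index{$key} = $value;   # the costly step: the per-entry DB store
}
close $log;

# In the real daemon this loop never exits; File::Tail's read()
# blocks until the log grows, e.g.:
#   my $tail = File::Tail->new(name => $logfile);
#   while (defined(my $line = $tail->read)) { ... }

print scalar(keys %index), " entries indexed\n";
unlink $logfile;
```

The point of the sketch is where the 10-15 entries/second budget goes: everything except the store line is cheap, so that single statement is the one worth profiling and optimizing.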

So perhaps changing to another method of entering and retrieving from the database is in order--I'm not married to the tied hash concept, it just seemed the Right Way when I got started. Hmm...Back to the perldocs.
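One alternative worth measuring before abandoning Berkeley DB: every store through a tied hash goes through Perl's tie dispatch (a STORE method call per assignment), while the object returned by tie() exposes the underlying DB_File methods directly. A sketch, assuming DB_File is installed (the in-memory database, via an undef filename, is just for demonstration):

```perl
#!/usr/bin/perl
# Sketch: bypassing the tie() layer for bulk inserts.
# Assumes the DB_File module is available; the undef filename
# creates an in-memory DB purely for demonstration.
use strict;
use warnings;
use DB_File;
use Fcntl qw(O_RDWR O_CREAT);

# tie() returns the underlying DB_File object as well as
# binding the hash.
my %h;
my $db = tie %h, 'DB_File', undef, O_RDWR | O_CREAT, 0666, $DB_HASH
    or die "tie failed: $!";

# Tied-hash style: $h{$k} = $v goes through tie dispatch on
# every assignment. Direct method calls skip that layer:
for my $i (1 .. 1000) {
    $db->put("key$i", "value$i");
}

my $v;
$db->get('key500', $v);   # $v is now "value500"
```

DB_File also lets you raise the cache size before tying (e.g. setting the cachesize field of $DB_HASH), which for sustained insert rates can matter more than the tie overhead itself; both are cheap experiments compared to switching databases.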


Re: Re: Re: Berkeley DB performance, profiling, and degradation...
by trs80 (Priest) on Feb 20, 2002 at 03:34 UTC
    MySQL itself is not based on BerkeleyDB; rather, the BerkeleyDB and InnoDB table types were added during the 3.23.x series to provide transaction support.

    Some speed propaganda on MySQL can be found on their benchmarking page.

    DISCLAIMER: I am not advocating MySQL as a solution to this particular problem. I am only attempting to dispel the myth that MySQL is itself based on BerkeleyDB.