With significant caveats, yes, you are wrong.
The key to handling large data sets is to have efficient
data structures and algorithms. For plain key/value lookups, a
dbm gives you exactly that. Given that there aren't
significantly better data structures available for that job, a
relational database cannot improve on it much. Oh, sometimes it
might be possible to eke out a slight win from a known fixed
structure. Mostly if you do, you lose it several times over to
the overhead of the relational database. (Particularly if, as
is usually the case, the database runs in another process that
your process needs to communicate with.)
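To make that baseline concrete, here is a minimal sketch of the
pure key/value case. I'm using Python's dbm module purely for
illustration (the file name is made up); any dbm binding looks
much the same:

    import dbm

    # A dbm file is an on-disk hash: open it, store, look up.
    # For a pure key/value workload this is already efficient,
    # and there is no separate database process to round-trip
    # through.
    with dbm.open("cache.db", "c") as db:
        db[b"alice"] = b"alice@example.com"   # store
        print(db[b"alice"])                   # fast lookup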
However, change the picture by saying that you don't
have a key/value relationship, but rather a tabular
structure where you want to be able to do quick lookups on
either of two different fields. Stuff that into any
decent relational database, add an index on each of those two
fields, and you're done. What do you have to do to get that
with dbms? Well, you could store your data in a long linear
file, and then store offsets into it as key/value pairs
in a couple of dbms (one for each index). That is a lot
of custom code to duplicate what the relational database
already does for you. And should the spec change just
slightly (say you need a third indexed field), you have a lot
of recoding (and debugging) to do.
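Here is a sketch of the relational side of that comparison,
again in Python (sqlite3 and the field names are my own choice
of illustration, not anything from the original problem):

    import sqlite3

    # A tabular structure with quick lookups on either of two
    # fields: one table, one index per field, and the database
    # handles all the bookkeeping the dbm-plus-offsets scheme
    # would make you write by hand.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE people (name TEXT, email TEXT)")
    conn.execute("CREATE INDEX idx_name ON people (name)")
    conn.execute("CREATE INDEX idx_email ON people (email)")
    conn.execute("INSERT INTO people VALUES (?, ?)",
                 ("alice", "alice@example.com"))
    conn.commit()

    # Indexed lookup by either field. A third indexed field
    # later is one more CREATE INDEX, not a recoding job.
    row = conn.execute("SELECT name FROM people WHERE email = ?",
                       ("alice@example.com",)).fetchone()
    print(row)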
Sounds like if you need even the simplest structure
beyond plain key/value, relational databases give a nice
development win over a dbm. And should you need 2 tables and a
join, well, your average programmer will probably code it as a
nested loop, searching one table once for each element in the
other (either explicitly or implicitly), which is O(n*m) when
an indexed join would do. Avoiding that mistake by default
makes your algorithms scale much better. Need I say more about
how quickly the relational database pays for itself?
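For the join case, a sketch along the same lines (the two-table
schema here is hypothetical, just to show the shape of it):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE people (email TEXT PRIMARY KEY, name TEXT);
        CREATE TABLE orders (email TEXT, item TEXT);
        CREATE INDEX idx_orders_email ON orders (email);
        INSERT INTO people VALUES ('alice@example.com', 'alice');
        INSERT INTO orders VALUES ('alice@example.com', 'widget');
    """)

    # One declarative statement. The query planner walks the
    # index instead of rescanning orders once per row of people,
    # which is the nested-loop trap the hand-rolled version
    # falls into.
    for name, item in conn.execute(
            "SELECT people.name, orders.item "
            "FROM people JOIN orders ON orders.email = people.email"):
        print(name, item)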
But if your problem is dead simple, then the relational
database is a loss.