I am trying to load data from flat files into Postgres. The script is a bulk loader: it generates an individual text file per table for COPY loading, and indexes are only created after the tables have been loaded into Postgres. In the script, I need to keep track of the IDs as they are generated. Leaving the disk-based hash aside, this approach greatly speeds up the import compared with a straight load into a schema that already has its indexes in place.
I'm only using BerkeleyDB to hold the ID hash while building the input files.
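Roughly, the loop looks like this. This is a minimal sketch, assuming Python with the bsddb3 Berkeley DB bindings; the table name, file paths, and sample records are illustrative, not from my actual script:

```python
# Minimal sketch, assuming Python with the bsddb3 Berkeley DB bindings;
# the table, file names, and sample records are illustrative only.
from bsddb3 import db

# Disk-based hash, so the ID map can grow past available RAM.
id_map = db.DB()
id_map.open('id_map.db', dbtype=db.DB_HASH, flags=db.DB_CREATE)

next_id = 1
with open('people.copy', 'w') as out:
    for name in ('alice', 'bob', 'alice'):        # stand-in for the flat-file rows
        key = name.encode()
        if id_map.get(key) is None:               # first occurrence: assign an ID
            id_map.put(key, str(next_id).encode())
            out.write(f'{next_id}\t{name}\n')     # one tab-separated COPY row
            next_id += 1
        # dependent tables can call id_map.get(key) to resolve foreign keys

id_map.close()

# Then, in psql, load the file and build indexes only afterwards:
#   COPY people FROM '/abs/path/people.copy';
#   CREATE INDEX people_name_idx ON people (name);
```

The hash is only needed during file generation; once the COPY files are written, it can be thrown away.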