Hello, I have a question about handling very large data sets with BerkeleyDB. I am trying to load a hash table with a data set of over 22 million unique tokens, and I have tied the hash using the BerkeleyDB::Hash module.
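For reference, the tie looks roughly like the sketch below (the filename, cache size, and input loop are illustrative placeholders rather than my exact code):

    use strict;
    use warnings;
    use BerkeleyDB;

    # Tie a Perl hash to an on-disk BerkeleyDB hash database.
    # 'tokens.db' and the cache size are placeholders, not my real settings.
    tie my %tokens, 'BerkeleyDB::Hash',
        -Filename  => 'tokens.db',
        -Flags     => DB_CREATE,
        -Cachesize => 64 * 1024 * 1024
        or die "Cannot tie hash: $BerkeleyDB::Error\n";

    # Load one token per line (here read from STDIN as an example).
    while (my $token = <STDIN>) {
        chomp $token;
        $tokens{$token} = 1;
    }

    untie %tokens;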
The problem I am having is that after 3 days it has loaded only 15% of the data. Has anyone else tried to load that much data using BerkeleyDB (or any other method) and succeeded in a reasonable amount of time?
Thanks!