The original Berkeley DB didn't support transactions. Even though newer versions do support transactions, DB_File doesn't seem to use them.
If it still works the way I remember, every change is written straight to the file as it happens, but you can't rely on the file size as a safe indicator that every write has completed.
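If you need a stronger guarantee that writes have reached the file, DB_File does expose the underlying handle's documented sync method. Here's a minimal sketch (the file name and data are made up):

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    # Tie a hash to a Berkeley DB file; keep the object tie() returns
    # so we can call methods on the underlying DB_File handle.
    my %h;
    my $db = tie %h, 'DB_File', 'data.db', O_RDWR|O_CREAT, 0644, $DB_BTREE
        or die "Cannot open data.db: $!";

    $h{some_key} = 'some value';

    # Flush any buffered pages out to the file now, rather than
    # watching the file grow as a sign the write has landed.
    $db->sync == 0 or die "sync failed: $!";

    undef $db;    # drop the extra reference before untie
    untie %h;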
A different way to speed up the load is to randomize the order of the keys (or use a pseudo-random mapping of the keys themselves, such as MD5). I know it sounds odd, but if you are using B-tree storage and insert the keys in sorted order, you get very long load times because the tree is constantly being rebalanced.
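A sketch of the randomized-order idea, using shuffle from the core List::Util module (the demo data and file name are just illustrative):

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;
    use List::Util qw(shuffle);

    my %db;
    tie %db, 'DB_File', 'btree.db', O_RDWR|O_CREAT, 0644, $DB_BTREE
        or die "Cannot open btree.db: $!";

    # Demo data whose keys would otherwise arrive in sorted order.
    my %records = map { sprintf('key%06d', $_) => "value $_" } 1 .. 100_000;

    # Inserting in random order spreads the inserts across the tree
    # instead of always appending at the right edge, which avoids the
    # constant rebalancing you get with sorted input.
    for my $key (shuffle keys %records) {
        $db{$key} = $records{$key};
    }

    untie %db;

If the data set is too large to buffer and shuffle, storing each record under md5_hex($key) from Digest::MD5 spreads the inserts the same way, at the cost of looking records up by digest afterwards.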
My suggestion to try hash storage still stands. Try that first.
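Switching to hash storage is a one-line change in the tie call; a minimal sketch (file name is made up):

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    # $DB_HASH selects Berkeley DB's hash storage instead of the B-tree,
    # so insert cost doesn't depend on the key order at all.
    my %h;
    tie %h, 'DB_File', 'data-hash.db', O_RDWR|O_CREAT, 0644, $DB_HASH
        or die "Cannot open data-hash.db: $!";

    $h{example} = 'value';
    untie %h;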