When the data structure becomes too large to store it all in memory, is tying with MLDBM the preferred paradigm? What are the alternatives?
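For context, the tie-with-MLDBM approach the question refers to looks roughly like the sketch below. The DB_File backend and Storable serializer are just one common combination, and the file name and keys are invented for illustration:

    use strict;
    use warnings;
    use MLDBM qw(DB_File Storable);   # DBM backend + serializer for nested values
    use Fcntl;

    # Tie a hash to an on-disk DBM file; values may be arbitrary nested structures.
    tie my %db, 'MLDBM', 'data.db', O_CREAT|O_RDWR, 0640
        or die "Cannot tie data.db: $!";

    $db{record_1} = { name => 'foo', scores => [ 1, 2, 3 ] };   # whole value serialized on store

    # The classic MLDBM caveat: nested elements cannot be updated in place.
    my $rec = $db{record_1};     # fetch (deserializes a copy)
    $rec->{scores}[0] = 42;      # modify the copy
    $db{record_1} = $rec;        # store it back so the change reaches disk

    untie %db;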
It really depends upon what is in the data structure and how you are using it. Some of the considerations that might influence the best choice are:
A web app that runs hundreds or thousands of times an hour for a few milliseconds each time might require a different solution from a data processing app that loads once a day or week and runs for minutes or hours each time.
If only one or two rows are used per run, it might make more sense to leave the data on disk and structure the file such that it can be randomly accessed.
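As a rough sketch of that random-access idea (the record length, file name, and fetch_record helper are all assumptions for illustration), a file of fixed-length records lets a run seek straight to the one or two records it needs without loading the rest:

    use strict;
    use warnings;

    my $RECLEN = 64;    # assumed fixed record length in bytes

    open my $fh, '<', 'records.dat' or die "open records.dat: $!";
    binmode $fh;

    sub fetch_record {
        my ($n) = @_;                          # zero-based record number
        seek $fh, $n * $RECLEN, 0 or die "seek: $!";
        read $fh, my $buf, $RECLEN
            or die "read record $n failed";
        return $buf;
    }

    # Pull only the records this run actually needs; the rest never leaves disk.
    my $rec = fetch_record(12_345);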
The best solution may depend upon some or all of these considerations, and others that arise from them. A clear description of the data set and how it is used would be the quickest way of eliciting good answers.