Anyway, because the majority of these files will be far under the 4KB standard cluster size, storing them as separate files would waste a lot of disk space--which is limited in this project. So, I'm going to attempt to put all of the files together into one huge file, and keep an ID-to-file-location index in a separate file (though it will live in RAM as arrays and/or hashes until the build of the mega-file is complete). Simple enough, right? Until I realized one _major_ problem with this approach...
The fact that the files are all built at the same time! For every integer that gets added in the middle of the mega-file (i.e. nearly all of them), the location of every sub-file after it would have to be changed!
Now my thinking is to have one array that stores the size of every sub-file, plus a second array that, for every 1000 sub-files or so, holds a "marker": the total size of all of the sub-files before it. To get the file location of a sub-file while building the mega-file, I'd only have to go back to the last marker and add the sizes of the sub-files between the marker and the one I want. If I did this, I could also use my own very small "cluster" size, such that the numbers would only have to be updated every 100 entries or so, but the wasted disk space would be minimal.
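To make that concrete, here's a rough Perl sketch of the bookkeeping I have in mind (the names @sizes, @markers, CHECKPOINT, and the three subs are just placeholders for illustration, not working project code):

```perl
use strict;
use warnings;

# One marker every N sub-files; roughly 1000 as described above.
use constant CHECKPOINT => 1000;

my @sizes;    # $sizes[$i]   = current size (bytes) of sub-file $i
my @markers;  # $markers[$k] = total size of all sub-files before index $k * CHECKPOINT

# Register a new, initially empty sub-file and return its index.
sub add_subfile {
    my $idx = scalar @sizes;
    if ($idx % CHECKPOINT == 0) {
        # New marker: previous marker plus the sizes of the block between them.
        my $total = @markers ? $markers[-1] : 0;
        if ($idx > 0) {
            $total += $sizes[$_] for ($idx - CHECKPOINT .. $idx - 1);
        }
        push @markers, $total;
    }
    push @sizes, 0;
    return $idx;
}

# Record $bytes being appended to sub-file $idx while the mega-file is built.
# Only the markers after it shift, not the location of every sub-file.
sub grow_subfile {
    my ($idx, $bytes) = @_;
    $sizes[$idx] += $bytes;
    my $first = int($idx / CHECKPOINT) + 1;
    $markers[$_] += $bytes for ($first .. $#markers);
}

# Current offset of sub-file $idx in the mega-file: jump back to the
# nearest marker, then add the sizes of the sub-files in between.
sub offset_of {
    my ($idx) = @_;
    my $mark   = int($idx / CHECKPOINT);
    my $offset = $markers[$mark];
    $offset += $sizes[$_] for ($mark * CHECKPOINT .. $idx - 1);
    return $offset;
}
```

The idea is that when a sub-file grows, only the handful of markers after it get bumped instead of the offset of every later sub-file, and looking up an offset never walks more than one checkpoint interval of the size array.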
My question is (finally--grin): how would you attack this problem? Any ideas?
HUGE thanks in advance!