If you really want to do things yourself ...
Get a honking big file, say 4 GB, and a smaller index file, say 4 MB.
To access file 17623, seek to location 17623 * 4 in the smaller file and read the 4-byte word you find there; call it <offset>. Seek to location <offset> in the larger file and read bytes until you reach an end-of-record marker. Alternately, you could store a 2-byte length next to each offset in the index file.
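A rough sketch of that lookup in Python (the little-endian 4-byte offsets and the NUL byte used as the end-of-record marker are assumptions for illustration, not part of the scheme as stated):

```python
import struct

RECORD_TERMINATOR = b"\x00"  # assumed end-of-record marker

def read_record(index_path, data_path, file_number):
    """Fetch file `file_number` via the two-file index scheme."""
    with open(index_path, "rb") as idx:
        idx.seek(file_number * 4)               # each index entry is 4 bytes
        (offset,) = struct.unpack("<I", idx.read(4))
    with open(data_path, "rb") as data:
        data.seek(offset)
        chunk = bytearray()
        while True:
            b = data.read(1)
            # stop at the marker (or at true end-of-file for the last record)
            if not b or b == RECORD_TERMINATOR:
                return bytes(chunk)
            chunk += b
```

With the 2-byte-length variant you would instead read 6 bytes per index entry and then `data.read(length)` in one call, which avoids scanning for a marker.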
If you need to enlarge file N, copy it to the end of the large file, where you can do whatever you want with it, and update its index entry. You can keep a list of the unused 'holes' this leaves behind, and move files into a hole they fit instead of always appending to the end. When the large file grows too big or too much space is wasted in holes, compact it to a new file, generating a new index as you go.
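The move-or-reuse-a-hole and compaction steps might look like this (a sketch, assuming the 2-byte-length variant of the record format, an in-memory list of offsets for the index, and a dict mapping hole sizes to offsets; the exact-fit hole policy is a simplification):

```python
import struct

def append_record(data, index, payload):
    """Append a new record at the end of the data file; return its number."""
    data.seek(0, 2)                      # seek to end of file
    offset = data.tell()
    data.write(struct.pack("<H", len(payload)) + payload)
    index.append(offset)
    return len(index) - 1

def enlarge_record(data, index, holes, number, payload):
    """Move record `number` into a fitting hole, or to the end of the file."""
    old_offset = index[number]
    data.seek(old_offset)
    (old_len,) = struct.unpack("<H", data.read(2))
    holes.setdefault(old_len, []).append(old_offset)  # old slot becomes a hole
    fit = holes.get(len(payload))        # reuse an exact-size hole if any
    if fit:
        offset = fit.pop()
    else:
        data.seek(0, 2)                  # otherwise append at the end
        offset = data.tell()
    data.seek(offset)
    data.write(struct.pack("<H", len(payload)) + payload)
    index[number] = offset

def compact(data, index, new_path):
    """Copy live records into a fresh file, producing the new index."""
    new_index = []
    with open(new_path, "wb") as out:
        for offset in index:
            data.seek(offset)
            (length,) = struct.unpack("<H", data.read(2))
            payload = data.read(length)
            new_index.append(out.tell())
            out.write(struct.pack("<H", length) + payload)
    return new_index
```

A real implementation would also want first-fit or best-fit hole selection rather than exact-size matching, but the bookkeeping is the same.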
On the other hand, DB companies spend millions of dollars and devote hundreds of employees to maximizing the efficiency of exactly these operations. Can you really outperform them?
--
TTTATCGGTCGTTATATAGATGTTTGCA
In reply to Re: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)
by TomDLux
in thread Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)
by rjahrman