My expectation is that most databases would use a well-known data structure (such as a BTree) to store this kind of data, which avoids a million directory entries and also allows for variable-length data. I admit that an RDBMS might get this wrong, but I'd expect most of them to get it right on the first try. Certainly BerkeleyDB will.
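For concreteness, this is roughly what the single-BTree-file approach looks like with Perl's DB_File. It is a minimal sketch, not the benchmark code; the filename (arrays.btree), the key naming, and the pack 'V*' layout are illustrative assumptions on my part.

use strict;
use warnings;
use DB_File;
use Fcntl qw( O_RDWR O_CREAT );

# One BTree file holds all of the logical arrays as variable-length
# values keyed by name, instead of one file (directory entry) each.
my %store;
tie %store, 'DB_File', 'arrays.btree', O_RDWR | O_CREAT, 0666, $DB_BTREE
    or die "Cannot tie arrays.btree: $!";

# Store a packed list of 32-bit integers under a key (layout is illustrative).
$store{'array_00042'} = pack 'V*', 1, 2, 3;

# "Append" by unpacking, pushing, and repacking the value.
my @vals = unpack 'V*', $store{'array_00042'};
push @vals, 4;
$store{'array_00042'} = pack 'V*', @vals;

untie %store;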
Using DB_File:

Actual data stored (1000000 * 512 * 4) :  1.90 GB
Total filesize on disk                 :  4.70 GB
Total runtime (projected based on 1%)  :    47 hours

Actual data stored (1000000 * 512 * 4) :  1.90 GB
Total filesize on disk                 : 17.00 GB (Estimate)
Total runtime (projected based on 1%)  :    80 hours* (default settings)
Total runtime (projected based on 1%)  :    36 hours* (cachesize => 100_000_000)
(*) Projections based on 1% probably grossly underestimate the total runtime: even at these low levels of fill, each additional 0.1% was observed to take longer than the one before it.
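In DB_File terms, the cachesize tweak above means raising the cache on the $DB_BTREE info object before tying the file. A minimal sketch, with the filename and fill loop as placeholders rather than the benchmark code:

use strict;
use warnings;
use DB_File;
use Fcntl qw( O_RDWR O_CREAT );

# Give the BTree a 100 MB in-memory cache before tying the file.
$DB_BTREE->{'cachesize'} = 100_000_000;

my %store;
tie %store, 'DB_File', 'arrays.btree', O_RDWR | O_CREAT, 0666, $DB_BTREE
    or die "Cannot tie arrays.btree: $!";

# ... fill 1_000_000 keys, each with 512 packed 4-byte integers ...

untie %store;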
Further, I left the latter test running while I slept. It had reached 29.1% before I left it; five hours later it had reached 31.7%. I suspect it might never complete.
Essentially, this bears out exactly what I predicted at Re: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)( A DB won't help).