in reply to Re^2: Can you improve upon my algorithm.
in thread Can you improve upon my algorithm.
An N-way merge uses almost no memory, just enough buffering to avoid the thrashing caused by reading from N files in parallel. Say 10MB per way: for a 100GB file and 50 chunks, that means around 10_000 buffer refills, i.e. roughly 10_000 seeks, approximately 200s on a modern disk, ignoring fragmentation and FS overhead.
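To make the buffering concrete, here is a minimal sketch of a per-chunk reader that refills its buffer with one large sequential read, so the disk seeks once per 10MB rather than once per record. The package name, the next_line method, and the assumption of newline-terminated records are all illustrative, not from the original post:

```perl
use strict;
use warnings;

# One of these per chunk file. Pulls data in big sequential
# reads (10MB by default), which is what keeps the seek count
# near (file size / buffer size) instead of one seek per record.
package BufferedReader;

sub new {
    my ($class, $path, $buf_size) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    return bless {
        fh   => $fh,
        buf  => '',
        size => $buf_size || 10 * 1024 * 1024,   # 10MB per way, as above
    }, $class;
}

sub next_line {
    my $self = shift;
    # Refill with one large read whenever no complete line is buffered.
    while ($self->{buf} !~ /\n/) {
        my $n = sysread $self->{fh}, $self->{buf}, $self->{size},
                        length $self->{buf};
        last unless $n;   # EOF (or read error)
    }
    return undef unless length $self->{buf};
    $self->{buf} =~ s/^([^\n]*\n?)// and return $1;
}

1;
```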
The computational complexity of the merge is O(M·log N), where M is the dataset size (100GB). More concretely, every record passes through about log2(N) comparisons on its way out of the merge, so the memory traffic generated by the comparisons is c = M·log2(N). In your case N = 50, and log2(50) ≈ 5.6; rounding up generously to 8 gives c ≈ 8·M = 800GB. Peak RAM bandwidth nowadays is around 10GB/s, but that is a best-case figure; let's assume that in practice the effective memory bandwidth is 1GB/s. At that rate, those comparisons (the bottleneck there is the RAM) would take 800GB / 1GB/s = 800s. Less than 15 minutes!
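Where the log2(N) factor comes from: keep the current head record of every chunk in a binary min-heap; emitting one record and restoring heap order costs one sift-down of depth about log2(N). A minimal sketch, assuming plain lexical order and reader objects with a next_line method like the one above (none of this code is from the original post):

```perl
use strict;
use warnings;

# N-way merge via a binary min-heap of [head_record, reader] pairs.
# Each emitted record costs one sift-down: ~log2(N) comparisons.
sub merge_chunks {
    my ($out, @readers) = @_;
    my @heap;

    # Seed the heap with the first record of every chunk.
    for my $r (@readers) {
        my $line = $r->next_line;
        push @heap, [$line, $r] if defined $line;
    }
    for (my $i = int(@heap / 2) - 1; $i >= 0; $i--) {
        _sift_down(\@heap, $i);
    }

    while (@heap) {
        my ($line, $r) = @{ $heap[0] };
        print {$out} $line;

        my $next = $r->next_line;
        if (defined $next) {
            $heap[0] = [$next, $r];    # replace the root in place
        }
        else {
            my $last = pop @heap;      # chunk exhausted, shrink heap
            $heap[0] = $last if @heap;
        }
        _sift_down(\@heap, 0) if @heap;
    }
}

# Standard sift-down on an array-based binary heap, ordered by
# lexical comparison of the head records.
sub _sift_down {
    my ($h, $i) = @_;
    my $n = @$h;
    while (1) {
        my ($l, $r) = (2 * $i + 1, 2 * $i + 2);
        my $m = $i;
        $m = $l if $l < $n && $h->[$l][0] lt $h->[$m][0];
        $m = $r if $r < $n && $h->[$r][0] lt $h->[$m][0];
        last if $m == $i;
        @$h[$i, $m] = @$h[$m, $i];
        $i = $m;
    }
}
```

With 50 chunks the heap is 50 entries deep, so each output record triggers a sift-down of about log2(50) ≈ 6 comparisons, which is the factor estimated above.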
So, where does the time go? Into reading the data from disk into memory and writing it back. With a good HDD (100MB/s), reading plus writing 100GB takes 2 * 100GB / 100MB/s = 2000s.
In summary, merging the 50 blocks should take around 3000s. Under an hour! In theory!!!