If you sort the chunks in memory as they are generated and then apply the n-way merge as described, the full procedure costs one 100GB write for the sorted runs, plus another 100GB of reads and 100GB of writes for the merge pass. In total, 300GB of streaming (external memory) access.
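For reference, a minimal Perl sketch of that two-phase procedure (run generation plus n-way merge). It assumes newline-terminated records and chunks small enough to sort in RAM; the chunk size, temp-file handling, and linear-scan minimum are illustrative choices, not anything stated in the thread.

    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    # Phase 1: read fixed-size chunks, sort each in RAM, write a sorted
    # run. First sequential pass: every record written once (the 100GB
    # run write).
    sub write_sorted_runs {
        my ($in_fh, $lines_per_chunk) = @_;
        my @runs;
        until (eof $in_fh) {
            my @chunk;
            while (@chunk < $lines_per_chunk && defined(my $line = <$in_fh>)) {
                push @chunk, $line;
            }
            my ($run_fh) = tempfile(UNLINK => 1);
            print {$run_fh} sort @chunk;    # default lexical sort
            seek $run_fh, 0, 0;             # rewind for the merge pass
            push @runs, $run_fh;
        }
        return @runs;
    }

    # Phase 2: n-way merge. Every record is read once (100GB) and
    # written once more (100GB), giving 300GB of streaming I/O overall.
    sub merge_runs {
        my ($out_fh, @runs) = @_;
        my @heads = map { scalar readline $_ } @runs;  # one buffered line per run
        while (grep { defined } @heads) {
            # Linear scan for the smallest head; a heap would be better
            # for many runs, but this keeps the sketch short.
            my $min;
            for my $i (0 .. $#heads) {
                next unless defined $heads[$i];
                $min = $i if !defined($min) || $heads[$i] lt $heads[$min];
            }
            print {$out_fh} $heads[$min];
            $heads[$min] = readline($runs[$min]);   # refill from that run
        }
    }

Called as merge_runs($out_fh, write_sorted_runs($in_fh, 1_000_000)), the two passes, and hence the 300GB total, are explicit: no record is touched more than once per phase.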
Are you searching for an algorithm that does better, or do you claim to have found one?
In reply to Re^3: [OT] A measure of 'sortedness'? by Anonymous Monk
in thread [OT] A measure of 'sortedness'? by BrowserUk