in reply to Re: Can you improve upon my algorithm.
in thread Can you improve upon my algorithm.
Can't you at least use some extra RAM as a buffer?
I frequently need to sort huge (up to, and sometimes beyond, 100GB) files of fixed-length records. External sorting programs -- GNU sort or MS sort -- are very bad at this; it takes days. So finally I got around to writing my own.
Part of what makes general-purpose algorithms bad at this is the time they spend writing to and re-reading from spill files. Another part is that they have to cope with variable-length records.
My approach is to sort the files in-place on disk using memory-mapped IO. Map a chunk (sized to available physical memory) and sort it; map the next chunk and sort it; and so on. Then merge the sorted chunks.
By making the chunks half the size of available physical memory, I can ensure full access to two complete chunks at any time.
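Roughly, the chunk-sorting phase looks like this. This is only a simplified sketch, assuming POSIX mmap() and 8-byte records (on Windows, CreateFileMapping()/MapViewOfFile() play the same role); RECSIZE, CHUNKRECS and cmpRec are illustrative names, not taken from my actual code, and error checking is omitted:

    /* Sketch: sort a file of fixed-length records chunk by chunk, in place,
       via memory mapping.  Error checking omitted for brevity. */
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    #define RECSIZE   8                      /* fixed record length (bytes) */
    #define CHUNKRECS (256*1024*1024)        /* records per ~2GB chunk      */

    static int cmpRec( const void *a, const void *b ) {
        return memcmp( a, b, RECSIZE );      /* whole-record byte compare   */
    }

    int main( int argc, char **argv ) {
        (void)argc;                          /* only argv[1] is used        */
        int fd = open( argv[1], O_RDWR );
        struct stat st;
        fstat( fd, &st );
        size_t nRecs = (size_t)st.st_size / RECSIZE;

        for( size_t first = 0; first < nRecs; first += CHUNKRECS ) {
            size_t n = nRecs - first < CHUNKRECS ? nRecs - first : CHUNKRECS;

            /* map just this chunk of the file and qsort it in place */
            char *p = mmap( NULL, n * RECSIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, (off_t)( first * RECSIZE ) );
            qsort( p, n, RECSIZE, cmpRec );
            munmap( p, n * RECSIZE );        /* flushes the sorted chunk    */
        }
        close( fd );
        return 0;                            /* merge phase comes next      */
    }

The point is that the sorting happens directly in the file's own pages; there is no separate spill file to write out and re-read.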
One approach to merging the chunks is to sort them together in pairs in a kind of bi-directional bubble sort.
Note: treat each sorted (say) 2GB chunk as a single element in a sequence. By merging the chunks pair-wise and sweeping left-to-right, then right-to-left (one less each time), eventually everything ends up in the right place. This works; it's just not very efficient.
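To make the pass pattern concrete, here is a sketch of the sweep order over N sorted chunks. mergePair() is assumed, not real code: it stands for whatever pairwise in-place merge is used, leaving the smaller keys in the left-hand chunk:

    /* Sketch: bi-directional (cocktail-shaker style) sweep over N sorted
       chunks.  mergePair( i, j ) is assumed to merge chunks i and j in
       place, leaving the smaller keys in chunk i. */
    void mergePair( int i, int j );          /* assumed, defined elsewhere  */

    void mergeAllChunks( int nChunks ) {
        int lo = 0, hi = nChunks - 1;
        while( lo < hi ) {
            for( int i = lo; i < hi; ++i )   /* left-to-right pass          */
                mergePair( i, i + 1 );       /* largest keys drift rightward */
            --hi;                            /* rightmost chunk is now final */
            for( int i = hi; i > lo; --i )   /* right-to-left pass          */
                mergePair( i - 1, i );       /* smallest keys drift leftward */
            ++lo;                            /* leftmost chunk is now final  */
        }
    }

Each full sweep fixes one chunk at either end, which is where the "one less each time" comes from.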
So now I'm looking at insertion sorting the pairs together.
Yes. But how much is required to significantly improve things?
That is: I typically have 5 to 6 GB of my 8GB physical memory available when I'm not running something important. So, I could split that into (say) 2 x 2GB chunks and have 1 or 2GB as scratch space. But is 1 or 2GB enough to make a significant difference when merging 2 x 2GB?
A further complication to the assessment: that scratch space would be outside the memory-mapped file, so, like a spill file, whatever is copied into it eventually has to be copied back. Less costly than a spill file, but still a cost to be considered. In addition, whatever scratch space I use comes out of the space available for the buffers, which means smaller buffers; thus more (smaller) partial sorts and more buffers to merge. It's a difficult assessment.
In the standard merge phase of an external sort, you have an unlimited buffer -- the output file -- as the repository of the merged records.
I originally thought that, as the merge phase picks the next record from the sorted spill-files one at a time, I'd be able to put it back into the space vacated by the records extracted from the buffers for the merge cross-comparison. It doesn't work out like that. You have to ripple the replaced record down into its proper place in order to maintain the sub-sorted order the merge relies on.
I looked at trying to do that on an N-at-a-time basis, but it gets very messy with mapping and remapping sub-chunk-sized bits (pages) of the N buffers. So then I came up with this pair-wise, two-at-a-time process, which makes dealing with the mapping problem much simpler. And it works.
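For reference, the pairwise "insertion sort them together" step amounts to something like the following. This is only a simplified sketch, not my actual code: it assumes the two chunks sit adjacent in memory and that the records are plain 8-byte integers, and isortPair is an illustrative name:

    /* Sketch: in-place merge, by insertion, of two adjacent sorted runs of
       fixed-size records.  a[0..n1-1] and a[n1..n1+n2-1] are each already
       sorted; afterwards a[0..n1+n2-1] is fully sorted. */
    #include <stddef.h>
    #include <stdint.h>

    void isortPair( uint64_t *a, size_t n1, size_t n2 ) {
        for( size_t j = n1; j < n1 + n2; ++j ) {
            uint64_t rec = a[ j ];           /* next record from the right run */
            size_t   i   = j;
            /* ripple it left past every record greater than it */
            while( i > 0 && a[ i - 1 ] > rec ) {
                a[ i ] = a[ i - 1 ];
                --i;
            }
            a[ i ] = rec;                    /* drop it into its final slot */
        }
    }

In this naive form every out-of-place record is shuffled one slot at a time; a binary search for the insertion point plus a single memmove is the obvious refinement, but the ordering logic is the same.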
In C, qsorting 100 million 8-byte records takes ~40 seconds. qsorting them as 2 x 50 million record buffers takes ~37 seconds, and insertion sorting them together takes ~3 seconds:
    C:\test\C>inPlaceMerge 100000000 2
    qsort took 39.071390618 secs for 100000000 element array
    qsort took 37.654166926 secs for 2 partitions of 100000000 element array
    isort took 2.540992389 secs for 2 partitions of 100000000 element array
It looks less rosy with more partitions:
    C:\test\C>inPlaceMerge 100000000 4
    qsort took 39.024688029 secs for 100000000 element array
    qsort took 36.237764463 secs for 4 partitions of 100000000 element array
    isort took 6.176087620 secs for 4 partitions of 100000000 element array

    C:\test\C>inPlaceMerge 100000000 8
    qsort took 38.918586479 secs for 100000000 element array
    qsort took 34.792559013 secs for 8 partitions of 100000000 element array
    isort took 12.882464701 secs for 8 partitions of 100000000 element array
But nothing show-stopping.
The time cost of the insertion sort grows with the number of buffers: it (roughly) doubles each time the buffer count doubles. This is because the distance you have to ripple out-of-place records gets longer each time.
I think there is scope for improvement there. E.g. I'm currently merging the records 'forwards' -- 1&2, 1&3, 1&4, 1&5... -- does it make any difference if I reverse that and do 4&5, 3&4, 2&3, 1&2?
Obviously, when merging 1 & 2, there will still be records from 1 that need to move all the way to buffer 5. But shouldn't I be able to skip ahead by comparing the record against the top record in each of the now-ordered buffers, and insert it there? Trouble is, I'd still need to ripple all those displaced above it back through all the other buffers.
Maybe that's where the scratch space comes in. As I ripple records off the top of a buffer (to accommodate a new insertion), rather than rippling them through the rest of the buffers back to that insertion record's source, I spill them into a (say) page-sized 'defer' scratch buffer. And I only ripple them back once that buffer is full. With luck, as its contents come off the top of an already-sorted buffer, the whole scratch buffer can be inserted into the bottom of the preceding buffer as a block.
But does that help? You still need to move the preceding buffer(s) 'up' by a buffer-size to accommodate it; and then their spills go into the preceding buffer ...
At that point, I can't keep the costs/benefits analysis clear in my head...