in reply to Can you improve upon my algorithm.
Usually, you want an algorithm to work in-place when it holds the full dataset in memory. More rarely, when it holds the dataset on disk.
But both? Can't you at least use some extra RAM as a buffer?
Re^2: Can you improve upon my algorithm.
by BrowserUk (Patriarch) on Mar 12, 2015 at 13:09 UTC
Can't you at least use some extra RAM as a buffer?

I frequently need to sort huge (up to, and sometimes beyond, 100GB) files of fixed-length records. External sorting programs -- gnu sort or MS sort -- are very bad at this; it takes days. So finally I got around to writing my own.

Part of what makes general-purpose algorithms bad at this is the time they spend writing to and re-reading from spill files. Another part is that they need to deal with variable-length records.

My approach is to sort the files in-place on disk using memory-mapped IO: map an (available-physical-memory-sized) chunk and sort it, map the next chunk and sort it, and so on. Then merge the sorted chunks. By making the chunks 1/2 the size of available physical memory, I can ensure full access to two complete chunks at any time.

One approach to merging the chunks is to sort them together in pairs in a kind of bi-directional bubble sort. Note: each of the letters here represents a (say) 2GB chunk of sorted memory. By sorting them pair-wise and moving left-to-right, then right-to-left (one less each time), eventually everything ends up in the right place. This works; it's not very efficient.
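Purely as illustration, a minimal sketch of that chunk-sort phase, assuming 8-byte records that compare as unsigned integers and a hypothetical file name. It uses POSIX mmap/qsort for brevity (on Windows, where BrowserUk works, the equivalent mapping calls are CreateFileMapping/MapViewOfFile); the merge of the sorted chunks -- the hard part -- is left out:

```c
/* Sketch only: sort a file of fixed-length 8-byte records chunk by chunk,
 * in place, via memory-mapped IO. Chunk size and file name are assumptions. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define CHUNK_BYTES (2ULL * 1024 * 1024 * 1024)   /* 2GB chunks, as in the post */

static int cmp_rec(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    int fd = open("huge.bin", O_RDWR);            /* hypothetical file name */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0) { perror("open"); return 1; }

    for (off_t off = 0; off < st.st_size; off += CHUNK_BYTES) {
        size_t len = (size_t)((st.st_size - off < (off_t)CHUNK_BYTES)
                              ? st.st_size - off : (off_t)CHUNK_BYTES);
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, off);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        qsort(p, len / sizeof(uint64_t), sizeof(uint64_t), cmp_rec);  /* sort this chunk */
        munmap(p, len);                           /* changes land back in the file */
    }
    close(fd);
    /* ...the sorted chunks still have to be merged, which is the hard part
     * discussed in the rest of the thread. */
    return 0;
}
```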
So now I'm looking at insertion sorting the pairs together.

Now to answer your question: Yes. But how much is required to significantly improve things? That is: I typically have 5 to 6 GB of my 8GB physical memory available when I'm not running something important. So I could split that into (say) 2 x 2GB chunks and have 1 or 2GB as scratch space. But is 1 or 2GB enough to make a significant difference when merging 2 x 2GB?

As an additional complication to the assessment, that scratch space would be outside of the memory-mapped file, so, like spill files, whatever is copied into it will eventually have to be copied back. Less costly than a spill file, but still a cost to be considered. In addition, whatever scratch space I use comes out of the buffer space, thus requiring more buffers; thus more (smaller) partial sorts and more buffers to merge. It's a difficult assessment.

In the standard merge phase of an external sort, you have an unlimited buffer -- the output file -- as the repository of the merged records. I originally thought that, as the merge phase picks the next record from the sorted spill files one at a time, I'd be able to put them back into the space vacated by the records extracted from the buffers for merge cross-comparison. It doesn't work out like that. You have to ripple the replaced record down into its proper place in order to maintain the sub-sorted order required for the merge to work. I looked at trying to do that on an n-at-a-time basis, but it gets very messy with mapping and remapping sub-chunk-sized bits (pages) of the N buffers.

So then I came up with this pair-wise, two-at-a-time process, which makes dealing with the mapping problem much simpler. And it works. In C, qsorting 100 million 8-byte records takes ~40 seconds. qsorting that as 2 x 50 million record buffers takes ~37 seconds, and insertion sorting them together takes ~3 seconds:
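As a sketch of what "insertion sorting them together" could look like for two adjacent sorted runs of 8-byte records (an illustration, not BrowserUk's code; the record type and layout are assumptions): take each record of the right-hand run in turn and ripple it leftwards into place.

```c
/* Sketch: in-place merge of two adjacent sorted runs of 8-byte records by
 * insertion ("rippling"). buf holds both runs back to back:
 * buf[0..na-1] is sorted, buf[na..na+nb-1] is sorted. */
#include <stddef.h>
#include <stdint.h>

static void merge_pair_by_insertion(uint64_t *buf, size_t na, size_t nb)
{
    for (size_t i = na; i < na + nb; i++) {
        uint64_t rec = buf[i];
        size_t j = i;
        /* Ripple rec leftwards until everything before it is <= it.
         * Cheap when the runs barely overlap; the cost grows with the
         * distance records have to travel. */
        while (j > 0 && buf[j - 1] > rec) {
            buf[j] = buf[j - 1];
            j--;
        }
        buf[j] = rec;
    }
}
```

With more than two runs, the same rippling has to carry records across every intervening run, which is why the cost grows with each extra buffer, as discussed below.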
It looks less rosy with more partitions:
But nothing show-stopping. The time cost of the insertion sort for multiple buffers is not linear: it (roughly) doubles with each extra buffer, because the distance you have to ripple out-of-place records gets further each time. I think there is scope for improvement there. E.g. I'm currently merging the records 'forwards' -- 1&2, 1&3, 1&4, 1&5... -- does it make any difference if I reverse that and do 4&5, 3&4, 2&3, 1&2?

Obviously, when merging 1&2, there will still be records from 1 that need to move all the way to buffer 5. But shouldn't I be able to skip ahead by comparing the record against the top record in each of the now-ordered buffers, and insert it there? Trouble is, I'd still need to ripple all those displaced above it back through all the other buffers.

Maybe that's where the scratch space comes in. As I ripple records off the top of a buffer (to accommodate a new insertion), rather than rippling the rest of the buffers back to that insertion record's source, I spill them into a (say) page-sized 'defer' scratch buffer, and I only ripple them back once that buffer is full. With luck, as the contents come off the top of an already-sorted buffer, the whole buffer can be inserted into the bottom of the preceding buffer as a block. But does that help? You still need to move the preceding buffer(s) 'up' by a buffer-size to accommodate it; and then their spills into the preceding buffer ... At that point, I can't keep the costs/benefits analysis clear in my head...

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
I'm with torvalds on this
In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
by salva (Canon) on Mar 12, 2015 at 22:04 UTC
An N-way merge uses almost no memory, just enough to avoid the thrashing caused by reading from N files in parallel (say 10MB per way; for a 100GB file and 50 chunks, that means 10_000 seeks, approximately 200s on a modern disk, ignoring fragmentation and FS overhead).

The computational complexity of the merge is O(M log N), where M is the dataset size (100GB). Actually, if you count the number of comparisons that have to be performed, it is c = M*log2(N). In your case, as N = 50, c = 8*M = 800GB. Current RAM bandwidth is around 10GB/s, but that is a peak figure; let's assume that in practice the memory bandwidth is 1GB/s. That means that those comparisons (the bottleneck there is the RAM) would take 800GB / 1GB/s = 800s. 15 minutes!

So, where does the time go? In reading the data from disk into memory and writing it back. With a good HDD (100MB/s), reading + writing 100GB is 2 * 100GB / 100MB/s = 2000s.

In summary, merging the 50 blocks should take around 3000s. 1 hour! In theory!!!
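For illustration only (not salva's code): a minimal N-way merge over pre-sorted chunk files of fixed-length 8-byte records, with a large stdio buffer per way standing in for the 10MB-per-way read buffering. The chunk file names, record format and buffer size are assumptions. A linear scan picks the minimum here; a binary heap or tournament tree would cut that to roughly log2(N) comparisons per record, which is where the M*log2(N) figure comes from.

```c
/* Sketch: N-way merge of pre-sorted chunk files of 8-byte records.
 * File names, record type and buffer sizes are assumptions for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NWAYS   50
#define WAY_BUF (10 * 1024 * 1024)   /* ~10MB of buffering per way */

int main(void)
{
    FILE *in[NWAYS], *out = fopen("merged.bin", "wb");
    uint64_t head[NWAYS];            /* smallest unread record of each way */
    int live[NWAYS];                 /* 1 while that way still has records */
    char name[32];

    if (!out) { perror("merged.bin"); return 1; }

    for (int i = 0; i < NWAYS; i++) {
        snprintf(name, sizeof name, "chunk%02d.bin", i);      /* assumed naming */
        if (!(in[i] = fopen(name, "rb"))) { perror(name); return 1; }
        setvbuf(in[i], NULL, _IOFBF, WAY_BUF);                 /* big sequential reads */
        live[i] = fread(&head[i], sizeof head[i], 1, in[i]) == 1;
    }

    for (;;) {
        /* Pick the way with the smallest head record. A heap would do this
         * in ~log2(NWAYS) comparisons instead of NWAYS. */
        int min = -1;
        for (int i = 0; i < NWAYS; i++)
            if (live[i] && (min < 0 || head[i] < head[min]))
                min = i;
        if (min < 0)
            break;                                             /* all ways exhausted */
        fwrite(&head[min], sizeof head[min], 1, out);
        live[min] = fread(&head[min], sizeof head[min], 1, in[min]) == 1;
    }

    for (int i = 0; i < NWAYS; i++)
        fclose(in[i]);
    fclose(out);
    return 0;
}
```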
by BrowserUk (Patriarch) on Mar 13, 2015 at 04:40 UTC
In theory!!! In practice! My E: drive is brand new, barely used, and connected to a SATA-3 i/f. wc -c does almost nothing but increment a variable, and it reads a 34GB file in 192 secs, at a rate of 171MB/s:
I created 50 1GB files:
I then ran the following script which:
Total runtime (projection): 85 hours; average read rate: 10MB/minute, or 171KB/second.

The script:

Note: there is no heap or merge or insertion sort of the records being 'merged', indeed no comparisons whatsoever, and no writing to disk. Just the generation of a random number, the copying of 16 bytes, and the adjustment of a pointer (CUR) for each record. But it happens 3.3 billion times.

The result is that there is almost no IO activity for 98% of the time, and then occasional bursts as 10MB buffers are repopulated. As the nature of random is to distribute evenly, the 50 x 10MB reads tend to come grouped pretty close together; roughly every 50 minutes. With 102 buffers per 1GB file, that's 102 * 50 = 5100 minutes, or 85 hours, or 3 1/2 days; which mirrors the timings I've experienced using external sort programs. I'm not going to let it run to completion. I hope you'll understand why.

Update: At the point of posting this, the process has been running for exactly 2 hours and has read 890.4MB. That's 7.42MB/minute, which projects to 4.8 days to complete the 50GB.
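The script itself (Perl) isn't shown above. Purely as an illustration of the access pattern it is described as exercising, here is a rough C sketch under the same stated assumptions: 50 x 1GB files of 16-byte records, a 10MB in-memory buffer per file, and each step picking a random still-live file, copying one record out of its buffer and refilling the buffer from disk when it runs dry. No comparisons, no merging, no output.

```c
/* Sketch only: simulate the read pattern of a 50-way merge without doing
 * any merging. File names, record size (16 bytes) and buffer size (10MB)
 * follow the description in the post, not any actual script. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NFILES  50
#define RECSIZE 16
#define BUFRECS ((10 * 1024 * 1024) / RECSIZE)   /* ~10MB buffer per file */

int main(void)
{
    static char buf[NFILES][BUFRECS * RECSIZE];  /* 50 x 10MB read buffers */
    FILE *in[NFILES];
    size_t cur[NFILES], end[NFILES];             /* CUR pointer and fill level, in records */
    int live = NFILES;
    char rec[RECSIZE], name[32];

    for (int i = 0; i < NFILES; i++) {
        snprintf(name, sizeof name, "data%02d.bin", i);      /* assumed naming */
        if (!(in[i] = fopen(name, "rb"))) { perror(name); return 1; }
        end[i] = fread(buf[i], RECSIZE, BUFRECS, in[i]);     /* initial 10MB fill */
        cur[i] = 0;
        if (end[i] == 0) live--;
    }

    while (live > 0) {
        int i = rand() % NFILES;                 /* pick a random source "way" */
        if (cur[i] >= end[i]) {
            if (end[i] == 0) continue;           /* this way is already exhausted */
            end[i] = fread(buf[i], RECSIZE, BUFRECS, in[i]); /* refill burst */
            cur[i] = 0;
            if (end[i] == 0) { live--; continue; }
        }
        memcpy(rec, buf[i] + cur[i] * RECSIZE, RECSIZE);     /* copy 16 bytes */
        cur[i]++;                                            /* advance CUR */
    }

    for (int i = 0; i < NFILES; i++) fclose(in[i]);
    return 0;
}
```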
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
by salva (Canon) on Mar 13, 2015 at 10:06 UTC
by BrowserUk (Patriarch) on Mar 13, 2015 at 11:14 UTC
Re^2: Can you improve upon my algorithm.
by bitingduck (Deacon) on Mar 12, 2015 at 15:22 UTC
Your requirements are quite unusual.

In-place handling of huge datasets has been a theme of BrowserUk's questions for a while now. They're interesting because they're things you generally wouldn't think twice about for small data sets, but in a resource-constrained environment (time, memory) how you decide to do them can have a big effect on performance, or even solvability. It's kind of like going back to 1980 with a 1MHz machine with 16KB of RAM and trying to do anything at all. Back in those days I once made a list of all the four-letter combinations and stored them to disk, then printed them and discovered my printer was as fast as my floppy drive.