in reply to Re^7: Out of Memory when generating large matrix
in thread Out of Memory when generating large matrix
I have a very similar before-the-dawn-of-time story -- one I'm sure I've mentioned here before, probably in response to a previous sundial "system sort" solution.
(From long-ago memory, so the details may be fuzzy.) 60 million records sorted on 7 (or 9) keys, taking 2 weeks on twin PDP-11/60s.
Reversing the order of the keys reduced the total time to (I think) less than a day.
The reason: given the way the records were stored, the original key order meant doing a seek for every next record, and for almost every sub-sort.
Reversing the keys meant the first pass read the records sequentially. Having grouped records by that key, subsequent sub-sorts tended to reorder only within small groups of records that were close to each other on disk; hence far fewer disk and memory-cache misses.
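As an aside on why the order of per-key passes is negotiable at all: with a stable sort, running one pass per key from least-significant to most-significant key produces the same final order as a single sort on the composite key. This is a hedged modern sketch of that property, not the original PDP-11 procedure; the record fields ("region", "year", "id") are invented for illustration.

```python
# Records with three sort keys; field names are purely illustrative.
records = [
    {"region": "B", "year": 1978, "id": 3},
    {"region": "A", "year": 1979, "id": 1},
    {"region": "B", "year": 1978, "id": 1},
    {"region": "A", "year": 1978, "id": 2},
]

# Single-pass sort on the composite key (most-significant field first).
composite = sorted(records, key=lambda r: (r["region"], r["year"], r["id"]))

# Multi-pass stable sort: one pass per key, least-significant key first.
# Because Python's sort is stable, earlier passes survive as tie-breakers.
multipass = list(records)
for field in ("id", "year", "region"):
    multipass.sort(key=lambda r, f=field: r[f])

assert multipass == composite
```

So which key a pass handles first is a free choice, and picking the pass order that matches the physical layout of the records is exactly what turned seeks into sequential reads in the story above.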
Another big timesaver, applied before the big final mergesort, was arranging for temporary spill files to be written to "the other" disk pack -- the one opposite whichever pack held the file being processed. That applied to pretty much every process, and cut most of their run times in half.
It's hard to believe now that, within my working lifetime, it could have taken a month (before both changes) to sort 60 million records. (That was "big data" back then :) )
It's like something out of a Victorian novel where they describe it taking 3 days from London to Bath and 10 days to York.