in reply to Re^3: In-place sort with order assignment (-u)
in thread In-place sort with order assignment

Interesting thought. I have a long process running at the moment that I'd like to let finish without risking swapping or time penalties, but I'll give it a go later.

I do wonder if it isn't just moving the goal posts though. Instead of sorting 10e6 unique words, it would mean sorting 300e6 to 1e9 words and then discarding the vast majority of them.

It would address the memory problem, but at N log N, with N grown by a factor of 30 to 100, the time cost might be prohibitive?
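Rough arithmetic on that worry, assuming comparison counts scale as N*log2(N): 10e6 unique words cost about 10e6 * 23.3 ≈ 2.3e8 comparison-units; 300e6 words cost about 300e6 * 28.2 ≈ 8.5e9, roughly 36x as much; 1e9 words come out near 130x as much. So even before constant factors, the duplicate-laden sort starts a long way behind.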


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
RIP an inspiration; A true Folk's Guy

Re^5: In-place sort with order assignment (-u)
by tye (Sage) on Sep 21, 2010 at 07:00 UTC

    Is "/usr/bin/sort -u" really so dumb as to sort all of the duplicates and then only eliminate them at the end? I would be a bit surprised as well as sad if that were so. I would also be surprised if it used a hash or similar structure that tends to thwart "divide and conquer" strategies that end up being required for huge sorts like /usr/bin/sort is designed for.

    I'd expect a modified sorting algorithm that eliminates duplicates whenever they "touch". But I've also long wondered why I have yet to run into any discussions of a formal algorithm like that (not having searched for such). My somewhat wild guess is that the performance of such an algorithm would be closer to O(U*log(U)) than to O(D*log(D)) (U= number of unique items, D= total number of items including duplicates). A lower bound would be O(D+U*log(U)) and I suspect that is a pretty close bound.
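    A minimal sketch of that kind of merge step, in plain Perl rather than anything from coreutils (merge_unique is just an illustrative name): if each run is already sorted and already deduplicated, a duplicate pair can only occur across the two runs, and a two-way merge will always compare that pair directly, so one extra branch drops the extra copy on the spot.

    sub merge_unique {
        # Sketch only, not how GNU sort's mergefps works: merge two sorted,
        # duplicate-free runs, keeping only one copy of any line that
        # appears in both.
        my ( $x, $y ) = @_;
        my @out;
        my ( $i, $j ) = ( 0, 0 );
        while ( $i < @$x && $j < @$y ) {
            my $cmp = $x->[$i] cmp $y->[$j];
            if    ( $cmp < 0 ) { push @out, $x->[ $i++ ]; }
            elsif ( $cmp > 0 ) { push @out, $y->[ $j++ ]; }
            else               { push @out, $x->[ $i++ ]; $j++; }   # equal: emit once, discard the twin
        }
        push @out, @{$x}[ $i .. $#$x ], @{$y}[ $j .. $#$y ];
        return \@out;
    }

    Every run is then capped at U lines no matter how large D gets, which is where the hoped-for savings over sorting all D lines would come from.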

    As an aside, even with no customization of the sorting algorithm, a huge number of duplicates would speed up worst-case performance. Consider a huge list of just one item repeated over and over. Sorting it is O(N): you need far fewer comparisons, because two things comparing equal means they aren't "out of order", so no further checking is required to pin down where each belongs in the output.
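    The effect is easy to see in miniature with perl's own sort (since 5.8 a mergesort that notices pre-existing runs, so not the same code as /usr/bin/sort, but the same principle): count comparisons for all-duplicate versus mostly-distinct input of the same size.

    use strict;
    use warnings;

    my $n = 100_000;
    for my $case ( [ 'all duplicates'  => [ ('x') x $n ] ],
                   [ 'mostly distinct' => [ map { sprintf '%09d', int rand 1e9 } 1 .. $n ] ] ) {
        my ( $label, $data ) = @$case;
        my $cmps = 0;
        my @sorted = sort { $cmps++; $a cmp $b } @$data;
        printf "%-15s : %d comparisons\n", $label, $cmps;
    }

    The all-duplicate list should need roughly N-1 comparisons; the mostly-distinct one needs on the order of N*log2(N).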

    That makes a somewhat interesting case that can be estimated via observation. Generate huge lists of duplicates of equal size but with much different quantities of duplication and compare the run-times of '/usr/bin/sort' and '/usr/bin/sort -u' against them. Repeat for a wide range of list sizes. Perhaps also throw in Windows' sort.exe which doesn't have -u and might lack some optimizations related to duplicates? (No, I don't plan to spend any time running such an experiment any time soon.)
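    For anyone who does feel like spending the time, a rough driver might look like this (untested sketch; the plain "sort" binary, the file names, and the distinct-value counts are all assumptions):

    use strict;
    use warnings;
    use Time::HiRes 'time';

    # Hypothetical driver: 'sort' is assumed to be on PATH and to accept -u and -o.
    my $n = 1_000_000;                                    # fixed list size
    for my $distinct ( 10, 100, 1_000, 10_000, 100_000, 1_000_000 ) {
        open my $fh, '>', 'dups.dat' or die $!;
        print {$fh} int( rand $distinct ), "\n" for 1 .. $n;
        close $fh;
        for my $flags ( '', '-u' ) {
            my $t0 = time;
            system "sort $flags -o sorted.dat dups.dat";
            printf "%9d distinct, sort %-2s : %.2f s\n", $distinct, $flags, time - $t0;
        }
    }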

    - tye        

      Is "/usr/bin/sort -u" really so dumb as to sort all of the duplicates and then only eliminate them at the end?

      Yes.

      The 17-second difference between sorting 10e6 records containing just 100 distinct values, with and without -u, is all down to only having to write 100 records to the final output file.

      If it were only sorting 100 records, it would take far less time.

      C:\test>perl -E"say int( rand 100 ) for 1 .. 10e6" > 10e6(of100).dat C:\test>timeit \perl64\bin\sort.exe -u -o sorted "10e6(of100).dat" Took: 70.141548560 seconds C:\test>timeit \perl64\bin\sort.exe -o sorted "10e6(of100).dat" Took: 87.626558680 seconds

      The salient sub mergefps() (called from merge(), called from sort(), called from main()) is:

      static void
      mergefps (char **files, register int nfiles,
                FILE *ofp, const char *output_file)
      {
        FILE *fps[NMERGE];                   /* Input streams for each file. */
        struct buffer buffer[NMERGE];        /* Input buffers for each file. */
        struct line saved;                   /* Saved line storage for unique check. */
        struct line const *savedline = NULL; /* &saved if there is a saved line. */
        size_t savealloc = 0;                /* Size allocated for the saved line. */
        struct line const *cur[NMERGE];      /* Current line in each line table. */
        struct line const *base[NMERGE];     /* Base of each line table. */
        int ord[NMERGE];                     /* Table representing a permutation of fps,
                                                such that cur[ord[0]] is the smallest line
                                                and will be next output. */
        register int i, j, t;
        struct keyfield *key = keylist;
        saved.text = NULL;

        /* Read initial lines from each input file. */
        for (i = 0; i < nfiles; )
          {
            fps[i] = xfopen (files[i], "r");
            initbuf (&buffer[i], sizeof (struct line),
                     MAX (merge_buffer_size, sort_size / nfiles));
            if (fillbuf (&buffer[i], fps[i], files[i]))
              {
                struct line const *linelim = buffer_linelim (&buffer[i]);
                cur[i] = linelim - 1;
                base[i] = linelim - buffer[i].nlines;
                i++;
              }
            else
              {
                /* fps[i] is empty; eliminate it from future consideration. */
                xfclose (fps[i], files[i]);
                zaptemp (files[i]);
                free (buffer[i].buf);
                --nfiles;
                for (j = i; j < nfiles; ++j)
                  files[j] = files[j + 1];
              }
          }

        if (! ofp)
          ofp = xfopen (output_file, "w");

        /* Set up the ord table according to comparisons among input lines.
           Since this only reorders two items if one is strictly greater than
           the other, it is stable. */
        for (i = 0; i < nfiles; ++i)
          ord[i] = i;
        for (i = 1; i < nfiles; ++i)
          if (0 < compare (cur[ord[i - 1]], cur[ord[i]]))
            t = ord[i - 1], ord[i - 1] = ord[i], ord[i] = t, i = 0;

        /* Repeatedly output the smallest line until no input remains. */
        while (nfiles)
          {
            struct line const *smallest = cur[ord[0]];

            /* If uniquified output is turned on, output only the first of
               an identical series of lines. */
            if (unique)

        If it were only sorting 100 records, it would take far less time.

        Just as an aside, I have to reply to this paragraph with, of course, "Duh". :)

        The 17 second difference between sorting 10e6 records containing just 100 values, with & without -u, is all down to only having to write 100 records to the final output file.

        If that is your thesis, then you picked a really lousy experiment to test it. Luckily, it is easy to disprove.

        perl -E"say int(rand(1e6)) for 1..1e6" | .\time .\sort >nul 3.65 user 3.73 user 3.65 user 0.07 system 0.07 system 0.09 system 7.63 elapsed 7.61 elapsed 7.57 elapsed perl -E"say int(rand(1e5)).9 for 1..1e6" | .\time .\sort >nul 3.56 user 3.48 user 3.40 user 0.07 system 0.18 system 0.12 system 5.17 elapsed 5.19 elapsed 5.21 elapsed perl -E"say int(rand(1e4)).99 for 1..1e6" | .\time .\sort >nul 3.32 user 3.37 user 3.34 user 0.04 system 0.06 system 0.06 system 4.97 elapsed 5.12 elapsed 4.96 elapsed perl -E"say int(rand(1e3)).999 for 1..1e6" | .\time .\sort >nul 3.17 user 3.09 user 3.20 user 0.07 system 0.09 system 0.06 system 4.79 elapsed 4.84 elapsed 4.82 elapsed perl -E"say int(rand(1e2)).9999 for 1..1e6" | .\time .\sort >nul 2.84 user 2.74 user 2.92 user 0.07 system 0.20 system 0.03 system 4.47 elapsed 4.62 elapsed 4.53 elapsed perl -E"say int(rand(10)).99999 for 1..1e6" | .\time .\sort >nul 2.57 user 2.54 user 2.46 user 0.09 system 0.03 system 0.06 system 4.07 elapsed 4.07 elapsed 4.09 elapsed perl -E"say int(rand(1)).999999 for 1..1e6" | .\time .\sort >nul 1.60 user 1.60 user 1.60 user 0.04 system 0.06 system 0.10 system 3.15 elapsed 3.16 elapsed 3.12 elapsed

        Those runs all lack "-u" and so all have identical amounts of I/O and identical numbers of records to sort. The only difference is how many duplicates are present. Clearly, duplicates result in a non-trivial speed-up (even with a "dumb" implementation of "sort -u", one not even doing "-u") even at only 10x duplicates and not due to reduced I/O.

        Is that really the only application of unique in the source code? If so, don't you also find it sadly dumb that the routine that wrote to those files that are to be merged didn't also trivially avoid writing duplicates?
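        Skipping those writes is a one-compare-per-line change wherever a sorted run gets flushed, something like this (illustrative Perl, not the actual C in sort.c; write_run_unique is a made-up name):

        sub write_run_unique {
            # Sketch only: $run is a reference to an already-sorted array of lines.
            my ( $fh, $run ) = @_;
            my $prev;
            for my $line (@$run) {
                # In a sorted run a duplicate is always adjacent, so one
                # string compare against the previous line catches it.
                next if defined $prev && $line eq $prev;
                print {$fh} $line, "\n";
                $prev = $line;
            }
        }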

        - tye        

      Is "/usr/bin/sort -u" really so dumb as to sort all of the duplicates and then only eliminate them at the end?
      Yes.

      A little history. Not so long ago, memory was expensive. Computers didn't have multiple gigabytes of RAM available; at best they had a few megabytes, to be shared among dozens, if not hundreds, of users and their programs. sort was written with that mindset: sorting huge amounts of data without consuming heaps of memory. To eliminate duplicates without first sorting, you need something like a hash, which means either keeping all the data in memory or using really complex structured files.

      A lower bound would be O(D+U*log(U)) and I suspect that is a pretty close bound.
      Can you eliminate duplicates in o(N * log N) worst case? (That's little-oh, not big-oh.) If not, then O(D * log D) is a tight lower bound.

        A little history.

        Thanks for the condescension.

        To eliminate duplicates without first sorting, you need something like a hash

        A little much-more-recent history that you appear to have completely missed:

        I'd expect a modified sorting algorithm that eliminates duplicates whenever they "touch".

        Not to mention that I already touched on the problem with "something like a hash".

        Some less recent history: The earliest versions of 'sort' lacked the '-u' option. Of course, one could use "sort | uniq" to ignore duplicates. So somebody implemented the "-u" option. I stand by the assertion of "so dumb" (or "so very lazy") to implement "-u" by only filtering output at the very last step.

        Can you eliminate duplicates in o(N * log N) worst case? (That's little-oh, not big-oh.) If not, then O(D * log D) is a tight lower bound.

        I already noted that sorting D exact duplicates is O(D) without a single change to the algorithm. So in the extreme case of U==1, O(D*log(D)) is obviously not a lower bound. Having spent more time considering this from different angles, I'm nearly convinced that eliminating duplicates while sorting (not the false dichotomy of "either before or after" that you propose) leads to O(D*log(U)) performance.

        - tye