in reply to Re^4: In-place sort with order assignment (-u)
in thread In-place sort with order assignment

Is "/usr/bin/sort -u" really so dumb as to sort all of the duplicates and then only eliminate them at the end? I would be a bit surprised as well as sad if that were so. I would also be surprised if it used a hash or similar structure that tends to thwart "divide and conquer" strategies that end up being required for huge sorts like /usr/bin/sort is designed for.

I'd expect a modified sorting algorithm that eliminates duplicates whenever they "touch". But I've also long wondered why I have yet to run into any discussions of a formal algorithm like that (not having searched for such). My somewhat wild guess is that the performance of such an algorithm would be closer to O(U*log(U)) than to O(D*log(D)) (U= number of unique items, D= total number of items including duplicates). A lower bound would be O(D+U*log(U)) and I suspect that is a pretty close bound.
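
For illustration only, here is a minimal sketch (mine, not code from any real sort implementation) of the kind of merge step I mean: whenever the heads of the two runs compare equal, only one copy survives, so duplicates are eliminated the moment they "touch":

# Merge two sorted runs, dropping duplicates as soon as they "touch".
# Assumes each input run is already sorted and internally de-duplicated.
sub merge_unique {
    my( $x, $y ) = @_;
    my @out;
    my( $i, $j ) = ( 0, 0 );
    while( $i < @$x && $j < @$y ) {
        my $cmp = $x->[$i] cmp $y->[$j];
        if(    $cmp < 0 ) { push @out, $x->[ $i++ ] }
        elsif( $cmp > 0 ) { push @out, $y->[ $j++ ] }
        else              { push @out, $x->[ $i++ ]; $j++ }  # equal: keep one
    }
    push @out, @{$x}[ $i .. $#$x ], @{$y}[ $j .. $#$y ];
    return \@out;
}

# e.g. merge_unique( [qw( a b d )], [qw( b c d e )] ) yields [qw( a b c d e )]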

As an aside, even with no customization of the sorting algorithm, a huge number of duplicates would speed things up well past the worst case. Consider a huge list of just one item repeated over and over. Sorting it is O(N) because far fewer comparisons are needed: two equal items are never "out of order", so no further checking is required to pin down where each belongs in the output.

That makes a somewhat interesting case that can be estimated via observation. Generate huge lists of equal size but with very different amounts of duplication and compare the run-times of '/usr/bin/sort' and '/usr/bin/sort -u' against them. Repeat for a wide range of list sizes. Perhaps also throw in Windows' sort.exe, which doesn't have -u and might lack some optimizations related to duplicates? (No, I don't plan to spend any time running such an experiment any time soon.)
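
Should anyone feel like running it, something along these lines would do (an untested sketch; the record count, the duplication levels, and the file names are just placeholders):

#!/usr/bin/perl
# Untested sketch of the experiment described above: a fixed number of
# records, a varying amount of duplication, timing sort with and without -u.
use strict;
use warnings;
use Time::HiRes qw( time );

my $records = 1e7;
for my $range ( 1e7, 1e5, 1e3, 10 ) {
    my $file = "dups_$range.dat";
    open my $fh, '>', $file or die "open $file: $!";
    print {$fh} int( rand $range ), "\n" for 1 .. $records;
    close $fh;

    for my $cmd ( "/usr/bin/sort -o /dev/null $file",
                  "/usr/bin/sort -u -o /dev/null $file" ) {
        my $start = time;
        system $cmd;
        printf "%-45s %.2f seconds\n", $cmd, time() - $start;
    }
}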

- tye        

Re^6: In-place sort with order assignment (-u)
by BrowserUk (Patriarch) on Sep 21, 2010 at 08:37 UTC
    Is "/usr/bin/sort -u" really so dumb as to sort all of the duplicates and then only eliminate them at the end?

    Yes.

    The 17 second difference between sorting 10e6 records containing just 100 values, with & without -u, is all down to only having to write 100 records to the final output file.

    If it were only sorting 100 records, it would take far less time.

    C:\test>perl -E"say int( rand 100 ) for 1 .. 10e6" > 10e6(of100).dat

    C:\test>timeit \perl64\bin\sort.exe -u -o sorted "10e6(of100).dat"
    Took: 70.141548560 seconds

    C:\test>timeit \perl64\bin\sort.exe -o sorted "10e6(of100).dat"
    Took: 87.626558680 seconds

    The salient sub mergefps() (called from merge(), called from sort(), called from main()) is:

    static void
    mergefps (char **files, register int nfiles,
              FILE *ofp, const char *output_file)
    {
      FILE *fps[NMERGE];               /* Input streams for each file. */
      struct buffer buffer[NMERGE];    /* Input buffers for each file. */
      struct line saved;               /* Saved line storage for unique check. */
      struct line const *savedline = NULL;
                                       /* &saved if there is a saved line. */
      size_t savealloc = 0;            /* Size allocated for the saved line. */
      struct line const *cur[NMERGE];  /* Current line in each line table. */
      struct line const *base[NMERGE]; /* Base of each line table. */
      int ord[NMERGE];                 /* Table representing a permutation of fps,
                                          such that cur[ord[0]] is the smallest line
                                          and will be next output. */
      register int i, j, t;
      struct keyfield *key = keylist;
      saved.text = NULL;

      /* Read initial lines from each input file. */
      for (i = 0; i < nfiles; )
        {
          fps[i] = xfopen (files[i], "r");
          initbuf (&buffer[i], sizeof (struct line),
                   MAX (merge_buffer_size, sort_size / nfiles));
          if (fillbuf (&buffer[i], fps[i], files[i]))
            {
              struct line const *linelim = buffer_linelim (&buffer[i]);
              cur[i] = linelim - 1;
              base[i] = linelim - buffer[i].nlines;
              i++;
            }
          else
            {
              /* fps[i] is empty; eliminate it from future consideration. */
              xfclose (fps[i], files[i]);
              zaptemp (files[i]);
              free (buffer[i].buf);
              --nfiles;
              for (j = i; j < nfiles; ++j)
                files[j] = files[j + 1];
            }
        }

      if (! ofp)
        ofp = xfopen (output_file, "w");

      /* Set up the ord table according to comparisons among input lines.
         Since this only reorders two items if one is strictly greater than
         the other, it is stable. */
      for (i = 0; i < nfiles; ++i)
        ord[i] = i;
      for (i = 1; i < nfiles; ++i)
        if (0 < compare (cur[ord[i - 1]], cur[ord[i]]))
          t = ord[i - 1], ord[i - 1] = ord[i], ord[i] = t, i = 0;

      /* Repeatedly output the smallest line until no input remains. */
      while (nfiles)
        {
          struct line const *smallest = cur[ord[0]];

          /* If uniquified output is turned on, output only the first of
             an identical series of lines. */
          if (unique)

    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      If it were only sorting 100 records, it would take far less time.

      Just as an aside, I have to reply to this paragraph with, of course, "Duh". :)

      The 17 second difference between sorting 10e6 records containing just 100 values, with & without -u, is all down to only having to write 100 records to the final output file.

      If that is your thesis, then you picked a really lousy experiment to test it. Luckily, it is easy to disprove.

      perl -E"say int(rand(1e6)) for 1..1e6" | .\time .\sort >nul 3.65 user 3.73 user 3.65 user 0.07 system 0.07 system 0.09 system 7.63 elapsed 7.61 elapsed 7.57 elapsed perl -E"say int(rand(1e5)).9 for 1..1e6" | .\time .\sort >nul 3.56 user 3.48 user 3.40 user 0.07 system 0.18 system 0.12 system 5.17 elapsed 5.19 elapsed 5.21 elapsed perl -E"say int(rand(1e4)).99 for 1..1e6" | .\time .\sort >nul 3.32 user 3.37 user 3.34 user 0.04 system 0.06 system 0.06 system 4.97 elapsed 5.12 elapsed 4.96 elapsed perl -E"say int(rand(1e3)).999 for 1..1e6" | .\time .\sort >nul 3.17 user 3.09 user 3.20 user 0.07 system 0.09 system 0.06 system 4.79 elapsed 4.84 elapsed 4.82 elapsed perl -E"say int(rand(1e2)).9999 for 1..1e6" | .\time .\sort >nul 2.84 user 2.74 user 2.92 user 0.07 system 0.20 system 0.03 system 4.47 elapsed 4.62 elapsed 4.53 elapsed perl -E"say int(rand(10)).99999 for 1..1e6" | .\time .\sort >nul 2.57 user 2.54 user 2.46 user 0.09 system 0.03 system 0.06 system 4.07 elapsed 4.07 elapsed 4.09 elapsed perl -E"say int(rand(1)).999999 for 1..1e6" | .\time .\sort >nul 1.60 user 1.60 user 1.60 user 0.04 system 0.06 system 0.10 system 3.15 elapsed 3.16 elapsed 3.12 elapsed

      Those runs all lack "-u" and so all have identical amounts of I/O and identical numbers of records to sort. The only difference is how many duplicates are present. Clearly, duplicates result in a non-trivial speed-up, even at only 10x duplication and not due to reduced I/O (and that is with a "dumb" sort that isn't even doing "-u").

      Is that really the only application of unique in the source code? If so, don't you also find it sadly dumb that the routine that wrote to those files that are to be merged didn't also trivially avoid writing duplicates?
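
      For the record, skipping duplicates while writing an already-sorted run out to a temporary file would be about as cheap as optimizations get. An untested sketch (mine, not coreutils code):

      # Write an already-sorted run to a spill file, skipping adjacent duplicates.
      sub write_run_unique {
          my( $path, $sorted ) = @_;
          open my $fh, '>', $path or die "open $path: $!";
          my $prev;
          for my $line ( @$sorted ) {
              next if defined $prev && $line eq $prev;  # duplicate of previous line
              print {$fh} $line, "\n";
              $prev = $line;
          }
          close $fh or die "close $path: $!";
      }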

      - tye        

        If that is your thesis,

        It isn't. Just a guess that fits the observed data.

        you picked a really lousy experiment to test it.

        Had that been my purpose, I would have agreed with you.

        Luckily, it is easy to disprove.

        No point. You're disproving the wrong thing.

        The test was simply to assess what reduction in time is afforded by -u when sorting a file with a high proportion of dups.

        What it proved is: negligible!


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re^6: In-place sort with order assignment (-u)
by JavaFan (Canon) on Sep 21, 2010 at 11:50 UTC
    Is "/usr/bin/sort -u" really so dumb as to sort all of the duplicates and then only eliminate them at the end?
    Yes.

    A little history. Not so long ago, memory was expensive. Computers didn't have multi-Gb RAM available. At best, they had a few Mb, to be shared with dozens, if not hundreds, of users and their programs. sort was written with this mindset: sorting huge amounts of data while being able to do so without consuming heaps of memory. To eliminate duplicates without first sorting, you need something like a hash - which means either keeping all the data in memory or using really complex structured files.

    A lower bound would be O(D+U*log(U)) and I suspect that is a pretty close bound.
    Can you eliminate duplicates in ο(N * log N) worst case? (That's little-oh, not big-oh.) If not, then O(D * log D) is a tight lower bound.
      A little history.

      Thanks for the condescension.

      To eliminate duplicates without first sorting, you need something like a hash

      A little much-more-recent history that you appear to have completely missed:

      I'd expect a modified sorting algorithm that eliminates duplicates whenever they "touch".

      Not to mention that I already touched on the problem with "something like a hash".

      Some less recent history: The earliest versions of 'sort' lacked the '-u' option. Of course, one could use "sort | uniq" to ignore duplicates. So somebody implemented the "-u" option. I stand by the assertion of "so dumb" (or "so very lazy") to implement "-u" by only filtering output at the very last step.

      Can you eliminate duplicates in ο(N * log N) worst case? (That's little-oh, not big-oh) If not, the O(D * log D) is a tight lowerbound.

      I already noted that sorting D exact duplicates is O(D) without a single change to the algorithm. So in the extreme case of U==1, O(D*log(D)) is obviously not a lower bound. Having spent more time considering this from different angles, I'm nearly convinced that eliminating duplicates while sorting (not the false dichotomy of "either before or after" that you propose) leads to O(D*log(U)) performance.
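
      One back-of-the-envelope way to see why O(D*log(U)) is plausible, assuming a bottom-up merge sort that de-duplicates every run it produces (a rough sketch ignoring rounding, not a proof):

      % The runs entering merge level k each cover at most 2^(k-1) of the
      % original D items but, once de-duplicated, hold at most min(2^(k-1), U)
      % lines; there are about D/2^(k-1) of them, so the cost of level k is
      \[
        \mathrm{work}(k) \;\le\; \min\!\Bigl( D,\; U \cdot \frac{D}{2^{k-1}} \Bigr)
      \]
      % Summing over the roughly log2(D) levels and splitting the sum at
      % k0 = ceil(log2(U)):
      \[
        \sum_{k} \mathrm{work}(k)
          \;\le\; D \log_2 U \;+\; \sum_{k > k_0} \frac{U D}{2^{k-1}}
          \;\le\; D \log_2 U \;+\; 2D
          \;=\; O(D \log U)
      \]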

      - tye        

        I'm nearly convinced that eliminating duplicates while sorting ...leads to O(D*log(U)) performance.

        Do you have an algorithm in mind? One that is compatible with a file-based merge sort?

        I guess if you:

        1. Sort a buffer full; write to spill file eliminating dups as you go; repeat till input read.
        2. Parallel read from spill files, again eliminating dups as you go; until output written.

        Then you'd get: O( (N/S) * log(N/S) * S ) + the merge.

        So for 10e6 (I'm using log10 instead of log2 for easy math):

        • Zero spill files: 10e6 * 7 = 70e6.
        • 10 spill files: 1e6 * 6 * 10 = 60e6 + merge.
        • 100 spill files: 1e5 * 5 * 100 = 50e6 + merge.
        • 1000 spill files: 1e4 * 4 * 1000 = 40e6 + merge.
        • ...

        Of course, the merge step gets more expensive the more files you have. It is at least O(N/S log S) regardless of the ratio of U to N, so I don't think that you are going to approach O(N log U). Much less O(D log U).
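
        Just to make step 2 concrete, a minimal untested sketch (mine; a plain linear scan over the file heads rather than a heap, purely to show where the duplicate check would sit):

        # Merge several individually-sorted spill files into one output stream,
        # emitting each distinct line only once.
        sub merge_spills_unique {
            my( $out_fh, @paths ) = @_;
            my @fh   = map { open my $f, '<', $_ or die "open $_: $!"; $f } @paths;
            my @head = map { scalar readline $_ } @fh;   # current line from each file
            my $last;                                    # last line written
            while( grep defined, @head ) {
                # Find the smallest current line (linear scan; a heap scales better).
                my $min;
                for my $i ( 0 .. $#head ) {
                    next unless defined $head[$i];
                    $min = $i if !defined $min || $head[$i] lt $head[$min];
                }
                my $line = $head[$min];
                print {$out_fh} $line
                    unless defined $last && $line eq $last;  # drop duplicates here
                $last = $line;
                $head[$min] = readline $fh[$min];            # advance that file
            }
        }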