in reply to Re^5: In-place sort with order assignment (-u)
in thread In-place sort with order assignment

Is "/usr/bin/sort -u" really so dumb as to sort all of the duplicates and then only eliminate them at the end?

Yes.

The 17-second difference between sorting 10e6 records containing just 100 distinct values, with & without -u, is all down to only having to write 100 records to the final output file.

If it were only sorting 100 records, it would take far less time.

C:\test>perl -E"say int( rand 100 ) for 1 .. 10e6" > 10e6(of100).dat

C:\test>timeit \perl64\bin\sort.exe -u -o sorted "10e6(of100).dat"
Took: 70.141548560 seconds

C:\test>timeit \perl64\bin\sort.exe -o sorted "10e6(of100).dat"
Took: 87.626558680 seconds
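For contrast, here is a throwaway Perl sketch (illustrative only, not anything sort(1) actually does) of the alternative the original question imagines: weed out the duplicates with a hash first, so only the ~100 distinct lines ever reach the sort. The whole file still has to be read once, but the sort and the final write shrink to ~100 lines. The filename is just the one generated above.

use strict;
use warnings;

# Hypothetical dedupe-first approach, for illustration only.
my %seen;
open my $in, '<', '10e6(of100).dat' or die "open: $!";
$seen{$_} = 1 while <$in>;
close $in;

# Only the ~100 distinct lines are left to sort and write.
open my $out, '>', 'sorted' or die "open: $!";
print {$out} sort keys %seen;
close $out;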

The salient sub mergefps() (called from merge(), called from sort(), called from main()) is:

static void
mergefps (char **files, register int nfiles, FILE *ofp, const char *output_file)
{
  FILE *fps[NMERGE];                   /* Input streams for each file. */
  struct buffer buffer[NMERGE];        /* Input buffers for each file. */
  struct line saved;                   /* Saved line storage for unique check. */
  struct line const *savedline = NULL; /* &saved if there is a saved line. */
  size_t savealloc = 0;                /* Size allocated for the saved line. */
  struct line const *cur[NMERGE];      /* Current line in each line table. */
  struct line const *base[NMERGE];     /* Base of each line table. */
  int ord[NMERGE];                     /* Table representing a permutation of fps,
                                          such that cur[ord[0]] is the smallest line
                                          and will be next output. */
  register int i, j, t;
  struct keyfield *key = keylist;
  saved.text = NULL;

  /* Read initial lines from each input file. */
  for (i = 0; i < nfiles; )
    {
      fps[i] = xfopen (files[i], "r");
      initbuf (&buffer[i], sizeof (struct line),
               MAX (merge_buffer_size, sort_size / nfiles));
      if (fillbuf (&buffer[i], fps[i], files[i]))
        {
          struct line const *linelim = buffer_linelim (&buffer[i]);
          cur[i] = linelim - 1;
          base[i] = linelim - buffer[i].nlines;
          i++;
        }
      else
        {
          /* fps[i] is empty; eliminate it from future consideration. */
          xfclose (fps[i], files[i]);
          zaptemp (files[i]);
          free (buffer[i].buf);
          --nfiles;
          for (j = i; j < nfiles; ++j)
            files[j] = files[j + 1];
        }
    }

  if (! ofp)
    ofp = xfopen (output_file, "w");

  /* Set up the ord table according to comparisons among input lines.
     Since this only reorders two items if one is strictly greater than
     the other, it is stable. */
  for (i = 0; i < nfiles; ++i)
    ord[i] = i;
  for (i = 1; i < nfiles; ++i)
    if (0 < compare (cur[ord[i - 1]], cur[ord[i]]))
      t = ord[i - 1], ord[i - 1] = ord[i], ord[i] = t, i = 0;

  /* Repeatedly output the smallest line until no input remains. */
  while (nfiles)
    {
      struct line const *smallest = cur[ord[0]];

      /* If uniquified output is turned on, output only the first of
         an identical series of lines. */
      if (unique)
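Stripped of the C plumbing, the shape of that merge loop plus the unique check it is about to perform looks roughly like this Perl sketch (an illustration of the idea, not a translation of the GNU code; the two hard-coded runs stand in for the NMERGE input buffers):

use strict;
use warnings;

my $unique = 1;
my @runs = ( [qw(a a b d)], [qw(a c c d)] );   # stand-ins for the sorted input buffers
my $saved;                                     # last line written (cf. 'saved' above)

while ( my @live = grep { @$_ } @runs ) {
    # Pick the run whose head line sorts first -- the job the ord[] table does.
    my ($best) = sort { $a->[0] cmp $b->[0] } @live;
    my $smallest = shift @$best;

    # With -u, output only the first of an identical series of lines.
    next if $unique && defined $saved && $smallest eq $saved;
    $saved = $smallest;
    print "$smallest\n";
}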

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
RIP an inspiration; A true Folk's Guy

Re^7: In-place sort with order assignment (data)
by tye (Sage) on Sep 21, 2010 at 16:43 UTC
    If it were only sorting 100 records, it would take far less time.

    Just as an aside, I have to reply to this paragraph with, of course, "Duh". :)

    The 17 second difference between sorting 10e6 records containing just 100 values, with & without -u, is all down to only having to write 100 records to the final output file.

    If that is your thesis, then you picked a really lousy experiment to test it. Luckily, it is easy to disprove.

    perl -E"say int(rand(1e6)) for 1..1e6" | .\time .\sort >nul 3.65 user 3.73 user 3.65 user 0.07 system 0.07 system 0.09 system 7.63 elapsed 7.61 elapsed 7.57 elapsed perl -E"say int(rand(1e5)).9 for 1..1e6" | .\time .\sort >nul 3.56 user 3.48 user 3.40 user 0.07 system 0.18 system 0.12 system 5.17 elapsed 5.19 elapsed 5.21 elapsed perl -E"say int(rand(1e4)).99 for 1..1e6" | .\time .\sort >nul 3.32 user 3.37 user 3.34 user 0.04 system 0.06 system 0.06 system 4.97 elapsed 5.12 elapsed 4.96 elapsed perl -E"say int(rand(1e3)).999 for 1..1e6" | .\time .\sort >nul 3.17 user 3.09 user 3.20 user 0.07 system 0.09 system 0.06 system 4.79 elapsed 4.84 elapsed 4.82 elapsed perl -E"say int(rand(1e2)).9999 for 1..1e6" | .\time .\sort >nul 2.84 user 2.74 user 2.92 user 0.07 system 0.20 system 0.03 system 4.47 elapsed 4.62 elapsed 4.53 elapsed perl -E"say int(rand(10)).99999 for 1..1e6" | .\time .\sort >nul 2.57 user 2.54 user 2.46 user 0.09 system 0.03 system 0.06 system 4.07 elapsed 4.07 elapsed 4.09 elapsed perl -E"say int(rand(1)).999999 for 1..1e6" | .\time .\sort >nul 1.60 user 1.60 user 1.60 user 0.04 system 0.06 system 0.10 system 3.15 elapsed 3.16 elapsed 3.12 elapsed

    Those runs all lack "-u" and so all have identical amounts of I/O and identical numbers of records to sort. The only difference is how many duplicates are present. Clearly, duplicates result in a non-trivial speed-up, not due to reduced I/O, even at only 10x duplication, and even with a "dumb" sort implementation that isn't doing "-u" at all.

    Is that really the only application of unique in the source code? If so, don't you also find it sadly dumb that the routine that wrote to those files that are to be merged didn't also trivially avoid writing duplicates?
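    The change really would be trivial: a run that has just been sorted in memory has its duplicates sitting next to each other, so, when -u is in effect, suppressing them while the temp file is written costs one string comparison per line. A Perl sketch of that idea (not the coreutils code):

    use strict;
    use warnings;

    # Hypothetical helper: write an already-sorted in-memory run to a
    # file, skipping adjacent duplicates as it goes.
    sub write_run {
        my ( $fh, @sorted ) = @_;
        my $prev;
        for my $line (@sorted) {
            next if defined $prev && $line eq $prev;   # adjacent duplicate
            print {$fh} $line;
            $prev = $line;
        }
    }

    write_run( \*STDOUT, map { "$_\n" } sort qw(b a c a b a) );   # prints a, b, c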

    - tye        

      If that is your thesis,

      It isn't. Just a guess that fits the observed data.

      you picked a really lousy experiment to test it.

      Had that been my purpose, I would have agreed with you.

      Luckily, it is easy to disprove.

      No point. You're disproving the wrong thing.

      The test was simply to assess what reduction in time -u affords when sorting a file with a high proportion of dups.

      What it proved is: negligible!

