in reply to Re: Strategy for randomizing large files via sysseek
in thread Strategy for randomizing large files via sysseek

Why trade an O(2N) solution for an O(N)+O(N log N)* one?

*At best
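
For reference, a minimal sketch of one way to read the O(N)+O(N log N) figure: a decorate/sort/strip shuffle, where one O(N) pass attaches a random key to each line and the sort on those keys is the O(N log N) part. It is shown in memory purely for brevity; at the OP's sizes the sort would have to be external, and the filename is hypothetical.

    #!/usr/bin/perl
    # Sketch only: decorate each line with a random key (the O(N) pass),
    # sort on the key (the O(N log N) part), then strip the key back off.
    # In-memory purely for brevity; the OP's files would need an external sort.
    use strict;
    use warnings;

    open my $fh, '<', 'big.dat' or die "open: $!";   # hypothetical filename
    my @decorated = map { [ rand(), $_ ] } <$fh>;    # O(N): attach a random key
    close $fh;

    print map  { $_->[1] }                           # strip the key again
          sort { $a->[0] <=> $b->[0] } @decorated;   # O(N log N): sort on it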


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon

Re^3: Strategy for randomizing large files via sysseek
by bluto (Curate) on Sep 10, 2004 at 15:45 UTC
    The main benefit, of course, is that there is less code to test/maintain, hence the "low tech" caveat. If it's fast enough for the OP, then it's worth considering. I'm sure the OP can figure this out easily by testing.

    On a lesser note, I am not convinced that your solution will fairly shuffle the lines, even after a second pass -- though if the number of temp files and/or passes is scaled with the total number of lines, this may be sufficient for the OP's needs. (I.e. tempfiles^passes probably needs to equal or exceed the total number of lines to sort, but that's a WAG.)
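
    A back-of-the-envelope sketch of that WAG, purely to put numbers on it; the 20 million lines and the temp-file counts below are assumptions, not the OP's figures:

    #!/usr/bin/perl
    # Rough arithmetic for the "tempfiles^passes >= total lines" guess above.
    use strict;
    use warnings;
    use POSIX qw(ceil);

    my $lines = 20_000_000;                          # assumed record count

    for my $tempfiles (10, 100, 1000) {
        # smallest whole number of passes with tempfiles**passes >= lines
        my $passes = ceil( log($lines) / log($tempfiles) );
        printf "%5d temp files => at least %d passes (%d**%d = %g)\n",
            $tempfiles, $passes, $tempfiles, $passes, $tempfiles ** $passes;
    }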

      The main benefit of course is that there is less code to test/maintain,...

      I was kind of pleased with how simple the code was.

      On a lesser note I am not convinced that your solution will fairly shuffle the lines, even after a second pass...

      Indeed, you are correct: it does not produce a fair shuffle. It is capable (with a minor correction to the untested code) of producing every possible permutation, which I equated in my mind with "fair".

      Thanks for calling me to book and making me think about it harder.

      However, given the OP's info of at least 1GB of data in records of at most around 50 chars, there are at least 20 million records, and the number of possible orderings of such a shuffle is 20,000,000!. Even factorial( 1000 ) is already a 2,568-digit number.
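
      A quick sketch of the scale, using Math::BigInt (a core module):

      use strict;
      use warnings;
      use Math::BigInt;

      # 1000! is far too long to print in full; count its digits instead.
      my $fact = Math::BigInt->new( 1000 )->bfac();
      printf "factorial(1000) has %d digits\n", length( $fact->bstr() );
      # prints: factorial(1000) has 2568 digits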

      And with the processing taking around 15 minutes/GB, it is unlikely that the lack of fairness is going to be a factor :).

      That said, the OP's requirement to remove duplicates (rather than just not produce them) means a sort is required anyway, so it all becomes moot.
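
      For what it's worth, a sketch (not anyone's posted solution) of how the dedup could be folded into that same sort, leaning on the system sort(1), which copes with files larger than memory; 'big.dat' and the pipeline details are assumptions:

      #!/usr/bin/perl
      # Sketch: let the external sort remove the duplicates, then order the
      # surviving lines by a random key and strip the key off afterwards.
      # Assumes unix sort(1) and cut(1); 'big.dat' is a hypothetical name.
      use strict;
      use warnings;

      open my $uniq, '-|', 'sort', '-u', 'big.dat'
          or die "sort -u: $!";
      open my $shuffle, '|-', 'sort -n | cut -f2-'
          or die "shuffle pipeline: $!";

      while ( my $line = <$uniq> ) {
          print {$shuffle} rand(), "\t", $line;   # decorate with a random key
      }
      close $uniq;
      close $shuffle;                             # shuffled output lands on STDOUT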


      Examine what is said, not who speaks.
      "Efficiency is intelligent laziness." -David Dunham
      "Think for yourself!" - Abigail
      "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
        I was kind of pleased with how simple the code was.

        FWIW, I agree. So while I had ++'d it, I wanted to offer another solution to the OP. As I get older, and probably more cynical, I worry about writing code solutions to problems that can be solved with trivial-to-implement unix solutions (i.e. I see unix tools as old CPAN modules that still work well).

        I'm not saying that new code is bad, just that in these types of cases there needs to be a significant advantage to writing code by hand, since there is additional brainpower invested (esp. over time) in the solution. Perhaps your solution will significantly outperform the shell script solution for really large files; in that case the extra maintenance cost might be worth it for some.

        Update: Minor format fix.

Re^3: Strategy for randomizing large files via sysseek
by Anonymous Monk on Sep 10, 2004 at 17:58 UTC
    Well, your solution may be linear, but it doesn't remove duplicates, which is part of the requirement.