in reply to Re: Randomizing Big Files
in thread Randomizing Big Files

If you split the big file into small files and then randomize each small file, you are not actually doing a full randomization!

It is like splitting the dictionary by first letter into groups from A to Z and randomizing each group: the a... words will always be at the beginning of the data. What I want is to distribute all the data over the whole file, so that a process reading the list from the top is always reading a randomized sequence with the same probability of getting any word of the full file.

Re^3: Randomizing Big Files
by Boots111 (Hermit) on Jan 27, 2005 at 04:03 UTC
    All~

    Actually you can make this work. You use the same technique as a split-and-merge sort. After you have randomized each subgroup, you collect them together into one big random group by repeatedly selecting at random which group to take the next line from, until they are all empty.
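    A minimal sketch of that merge step, assuming the already-shuffled groups fit in memory as arrays of lines (merge_random is just an illustrative name; the next line is drawn from a group with probability proportional to how many lines it has left, so every interleaving of the two groups is equally likely):

        use strict;
        use warnings;

        # Randomly interleave two already-shuffled arrays of lines.
        # A source is chosen with probability proportional to how many
        # lines it still holds, which keeps the merged order unbiased.
        sub merge_random {
            my ($a, $b) = @_;
            my @out;
            while (@$a || @$b) {
                if (rand(@$a + @$b) < @$a) {
                    push @out, shift @$a;
                }
                else {
                    push @out, shift @$b;
                }
            }
            return \@out;
        }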

    This might even be a good approach in general, since split-and-merge is the technique databases use to sort things that won't fit in memory.

    First break the file into memory-sized chunks and use the Fisher-Yates shuffle to randomize each of them. Then take the randomized runs and shuffle them together by randomly selecting which one to take the next line from. Repeat the process until you have one very large random run. I know it will provide an even distribution if you combine all of the random runs simultaneously; I have yet to convince myself whether it will still provide an even distribution if your shuffle only takes two random runs at a time.
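    A rough sketch of that first phase, with all names and sizes made up for illustration (bigfile.txt, run0.txt, run1.txt, ..., and a chunk size of 100_000 lines), using an explicit Fisher-Yates shuffle on each chunk before writing it out as a run:

        use strict;
        use warnings;

        # Fisher-Yates shuffle: permutes @$array in place so that
        # every ordering is equally likely.
        sub fisher_yates_shuffle {
            my ($array) = @_;
            for (my $i = @$array - 1; $i > 0; $i--) {
                my $j = int rand($i + 1);
                @$array[$i, $j] = @$array[$j, $i];
            }
        }

        # Shuffle one chunk and write it out as a numbered run file.
        sub write_run {
            my ($n, $chunk) = @_;
            fisher_yates_shuffle($chunk);
            open my $out, '>', "run$n.txt" or die "run$n.txt: $!";
            print {$out} @$chunk;
            close $out;
        }

        # Read the big file in memory-sized chunks of lines.
        my ($run, @chunk) = (0);
        open my $in, '<', 'bigfile.txt' or die "bigfile.txt: $!";
        while (my $line = <$in>) {
            push @chunk, $line;
            if (@chunk >= 100_000) {
                write_run($run++, \@chunk);
                @chunk = ();
            }
        }
        close $in;
        write_run($run++, \@chunk) if @chunk;

    The resulting runs can then be shuffled together with something like the merge_random sketch above.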

    UPDATE: I have convinced myself that you can do it by repeatedly shuffling together two runs, as long as you select the runs to shuffle together randomly every time.
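    Continuing with the same illustrative names, the repeated pairwise version could keep a worklist of runs (each an array ref of already-shuffled lines), pull two off at random, and push back the merged result until only one remains:

        # Repeatedly pick two runs at random and merge them with
        # merge_random() from the sketch above, until one run is left.
        sub merge_all_pairwise {
            my (@runs) = @_;
            while (@runs > 1) {
                my ($a) = splice @runs, int(rand @runs), 1;
                my ($b) = splice @runs, int(rand @runs), 1;
                push @runs, merge_random($a, $b);
            }
            return $runs[0];
        }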

    Boots
    ---
    Computer science is merely the post-Turing decline of formal systems theory.
    --???