1) Create an array with the values 1 to 3.5 million. Randomize this array. Then start at the first element of the array: if it is (say) 1376, read the 1376th line from the file and write it to the new file. Then read the next number from the array, and its corresponding line from the old file, and so on (see the first sketch after this list).
2) Read in the first 100,000 or so lines, randomize them, and write them to a temp file. Read the second 100,000, randomize, write, and so on. When all 3.5 million lines are spread across 35 new files, pick a random number between 1 and 35, read the next line of that file, and append it to the end of your final file. That way the result is not merely random within 100,000-line chunks; the lines are distributed more evenly across the whole file (see the second sketch after this list).
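
Here is a minimal Perl sketch of the first method. The file names (lines.txt, shuffled.txt) and the line count are assumptions for illustration; a real version might build a byte-offset index and seek() instead of rescanning the file for every output line.

    use strict;
    use warnings;
    use List::Util qw(shuffle);

    # Hypothetical file names and line count, for illustration only.
    my $infile  = 'lines.txt';
    my $outfile = 'shuffled.txt';
    my $n       = 3_500_000;

    # The randomized array of line numbers; this index array is the
    # only thing held in memory, never the lines themselves.
    my @order = shuffle(1 .. $n);

    open my $out, '>', $outfile or die "open $outfile: $!";
    for my $want (@order) {
        # Rescan the old file to find the wanted line -- this is why
        # the method is slow: up to one full pass per output line.
        open my $in, '<', $infile or die "open $infile: $!";
        while (my $line = <$in>) {
            if ($. == $want) {    # $. is the current input line number
                print {$out} $line;
                last;
            }
        }
        close $in;
    }
    close $out;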
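
And a sketch of the second, split-shuffle-interleave method, under the same assumed file names:

    use strict;
    use warnings;
    use List::Util qw(shuffle);

    my $chunk = 100_000;                 # lines per temporary file
    open my $in, '<', 'lines.txt' or die "open lines.txt: $!";

    # Pass 1: shuffle each chunk of lines into its own temp file.
    my (@temps, @buf);
    my $flush = sub {
        return unless @buf;
        my $name = 'chunk' . scalar(@temps) . '.tmp';
        open my $t, '>', $name or die "open $name: $!";
        print {$t} shuffle(@buf);        # lines keep their newlines
        close $t;
        push @temps, $name;
        @buf = ();
    };
    while (my $line = <$in>) {
        push @buf, $line;
        $flush->() if @buf == $chunk;
    }
    $flush->();                          # last, possibly short, chunk
    close $in;

    # Pass 2: repeatedly pick a random temp file, copy its next line.
    my @handles = map { open my $h, '<', $_ or die "open $_: $!"; $h } @temps;
    open my $out, '>', 'shuffled.txt' or die "open shuffled.txt: $!";
    while (@handles) {
        my $i    = int rand @handles;    # random file, 0 .. $#handles
        my $line = readline $handles[$i];
        if (defined $line) { print {$out} $line }
        else               { splice @handles, $i, 1 }  # file exhausted
    }
    close $out;
    unlink @temps;

Once a temp file runs dry it is dropped from the pool, so the remaining lines are drawn from fewer and fewer files.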
The first method is just as 'random' as your normal method, because it is the same shuffle, only much slower. It has no memory problems, though.
I think it could be shown that the second method is also fairly 'random', but it is different, and perhaps not as 'acceptable' until you work out a formal proof.