in reply to Randomizing Big Files

What kind of constraints do you have? Does each process need to get the same number of bytes? The same number of lines? Or, if you have N processes, will approximately 1/N of the lines do? Can the lines be fed in the same order as in the file, or does that have to be random as well? Can you just chop the file into N equal pieces and shuffle those? If not, why not? Would the following algorithm do?
  1. Let the number of lines in the file be L and the number of processes be N. Label the processes P_i, 0 <= i < N, and let A_i = 0 for 0 <= i < N.
  2. For each line l, 0 <= l < L, assign the line to process P_i with probability (L/N - A_i)/(L - l). For the process P_j picked this way, increment A_j.
  3. If necessary, shuffle the lines assigned to each process.

Some pseudo, untested, code:

my $L = ... number of lines ...;
my $N = ... number of processes ...;

# Open a filehandle (pipe) for each process:
my @fh;
for (my $i = 0; $i < $N; $i++) {
    open $fh[$i], "| $process" or die;
}

# Initialize @A: $A[$i] counts the lines assigned to process $i so far.
my @A = (0) x $N;
my $l = 0;    # lines read so far

# Iterate over the input.
while (<$input>) {
    # Pick a number in [0, lines still to assign).
    my $r = rand($L - $l);

    # Find the process whose remaining quota covers $r.
    my $i = 0;
    while ($r >= $L / $N - $A[$i]) {
        $r -= $L / $N - $A[$i];
        $i++;
    }

    # Write the line; note the block around the filehandle element.
    print {$fh[$i]} $_;

    # Update the counters.
    $A[$i]++;
    $l++;
}
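Step 3 (shuffling the lines each process receives) is not covered by the loop above. A minimal sketch of that step, assuming each receiving process can hold its share of the lines in memory and reads them from STDIN (those details are assumptions, not part of the original post), could use List::Util's shuffle:

use List::Util qw(shuffle);

# Read every line assigned to this process, shuffle them,
# and write them back out in random order.
my @lines = <STDIN>;
print for shuffle @lines;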

Replies are listed 'Best First'.
Re^2: Randomizing Big Files
by Anonymous Monk on Jan 26, 2005 at 15:41 UTC
    ... my $r = rand($L - $l); ...
    This just makes it possible to pick the same line twice, and you are not sure you will get all the lines!