in reply to Strategy for randomizing large files via sysseek

One effective method of sorting huge files is a bucket sort: read the lines one at a time and write them out to a number of files according to which subset of the sorted order they belong to. E.g. all the records where the sort field starts with 'A' go to the A.tmp file, 'B' to B.tmp, etc. You then sort each of those smaller files and recombine them in the proper order at the end.
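
For concreteness, a minimal sketch of that bucket-and-merge idea might look like the following. It assumes the sort field is the first character of each record and that it is always an uppercase letter, so the buckets are A.tmp through Z.tmp; the file names are for illustration only.

#! perl -sw
use strict;

## Scatter each record into a bucket file named after the first character of
## its sort field -- assumed here to be the first character of the record,
## and assumed to be an uppercase letter, giving A.tmp .. Z.tmp.
my %bucket;
while( <> ) {
    my $key = substr $_, 0, 1;
    open( $bucket{ $key }, '>', "$key.tmp" ) or die $!
        unless $bucket{ $key };
    print { $bucket{ $key } } $_;
}
close $_ for values %bucket;

## Each bucket is now small enough to sort in memory; write the buckets back
## out in key order to produce the fully sorted file.
open my $out, '>', 'sorted.all' or die $!;
for my $key ( sort keys %bucket ) {
    open my $in, '<', "$key.tmp" or die $!;
    print { $out } sort <$in>;
    close $in;
}
close $out;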

You can use a similar strategy to randomise your dataset.

Open 100 temp files. Process all your input files, one file and one line at a time, picking an output file at random for each record. Then rewind all the temp files, shuffle the handles, and concatenate them back into a single final file in that random order. This is not a fair randomisation: the first record in the final file will almost certainly be one of the records from the first of the original files read.

However, you can then run the program a second time, supplying the output from the first run as its input. After this second pass, the randomisation will be fair.

As any given file is only ever processed sequentially, the processing should be just about as quick as it can be.

#! perl -sw
use strict;
use List::Util qw[ shuffle ];

## omit the glob if your shell expands wildcards for you.
BEGIN{ @ARGV = shuffle map glob, @ARGV }

## Open the temp files read/write ('+>') so they can be read back after
## being rewound. Assumes a tmp/ subdirectory exists.
my @temps;
open( $temps[ $_ ], '+>', "tmp/$_.tmp" ) or die $! for 0 .. 99;

## Scatter the input records across the temp files at random.
while( <> ) {
    print { $temps[ rand 100 ] } $_;
}

## Rewind the temp files, then concatenate them in a shuffled order.
seek $temps[ $_ ], 0, 0 for 0 .. 99;

open FINAL, '>', "randomised.all" or die $!;

for my $in ( shuffle @temps ) {
    while( <$in> ) {
        print FINAL;
    }
    close $in;
}

close FINAL;

*Untested.
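
Usage would be something along the lines of perl unsort.pl *.dat for the first pass, and then perl unsort.pl randomised.all for the second (the script name is made up). The second pass finishes reading randomised.all before it reopens it for output, so feeding the file back through the same script is safe.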


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon