The flaw (as I understand it) is that lines appearing earlier in the file get selected more often than lines appearing later in the file, which means that not all possible orderings of the lines are equally likely.
To do it right, you have to read all the lines in and then shuffle the resulting array. The canonical algorithm for this is the Fisher-Yates shuffle.
You would want to do something like:
    my @file = <>;          # slurp all lines into memory
    my $i = @file;
    while ( $i-- ) {
        # pick a random index from 0..$i and swap it with element $i
        my $j = int rand( $i + 1 );
        @file[ $i, $j ] = @file[ $j, $i ];
    }
    print @file;
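Run against the file(s) named on the command line, this prints the shuffled lines to standard output; assuming you save it as shuffle.pl (the name is just for illustration), something like perl shuffle.pl input.txt > shuffled.txt would do.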
As it turns out, this is one of those rarer cases where you must slurp the entire file into memory in order to process it.
Update: I think MeowChow's and Screamer's solutions are OK, but they do have the side effect of destroying the array. If all you need to do is print the lines, that is sufficient. Also, I'm not sure that repeatedly creating new arrays via the splice operation is wonderfully efficient; I haven't benchmarked it, it's just a gut feeling.
If, however, you need to process the array in other ways before writing it out, then you have to push the spliced-out elements onto another array, which means even more gymnastics going on behind the scenes.
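To illustrate what I mean, here is a rough sketch of the shape such a splice-based shuffle takes when you keep the lines around for further processing (this is my own guess at the approach, not MeowChow's or Screamer's actual code):

    my @file = <>;
    my @shuffled;
    while ( @file ) {
        # remove one random line from @file and keep it in @shuffled
        push @shuffled, splice( @file, int rand(@file), 1 );
    }
    # @file is now empty; @shuffled still holds every line and can be
    # processed further before being printed
    print @shuffled;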
In reply to Re: Mixing Up a Text File
by grinder
in thread Mixing Up a Text File
by Anonymous Monk