sub RandomiseArray {
    my @a = @_;
    my @b = ();
    # Splice a single element out of @a at a random offset and
    # append it to @b, until @a is empty.
    while (scalar @a) {
        push(@b, splice(@a, int(rand @a), 1));
    }
    return @b;
}
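A quick sanity check (just an illustration; the input list is arbitrary):

my @deck = (1 .. 10);
my @shuffled = RandomiseArray(@deck);
print "@shuffled\n";    # e.g. "7 2 10 4 1 9 3 6 8 5" - order varies per run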
As the array becomes longer, the probability of your routine finding an unused space becomes smaller and smaller.
With a 1..10 shuffle the probability of finding the right element for the last space is one in ten, which means you will need 10 tries on average. Not too bad. But with a 1..100_000 array, you will need 100_000 tries (on average) to fill in the last element, 99_999 to find the next-to-last, and... you get the idea. It will take a LOOONG time.
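For anyone reading this out of context, the kind of retry-based routine I'm describing looks roughly like this (a sketch, not a quote of the original node, so names and details may differ):

sub NaiveRandomise {
    my @in = @_;
    my (@out, %used);
    for my $item (@in) {
        my $slot;
        # Keep picking random slots until we hit one that's still free;
        # the fuller @out gets, the more retries this needs.
        do { $slot = int(rand scalar @in) } until !$used{$slot};
        $used{$slot} = 1;
        $out[$slot]  = $item;
    }
    return @out;
}

When k slots are still free out of n, each placement needs about n/k picks, so the whole run costs roughly n*(1 + 1/2 + ... + 1/n), i.e. on the order of n*log(n) picks, with the last few slots dominating.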
Yeah, I could do it in place (without the second copy of the array), but I'm going to save that for another time.
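For the curious, an in-place version would normally be a Fisher-Yates shuffle; a minimal sketch (mine, not code from this thread):

sub ShuffleInPlace {
    my $aref = shift;                       # shuffle the referenced array in place
    for (my $i = $#$aref; $i > 0; $i--) {
        my $j = int(rand($i + 1));          # pick a position from 0..$i
        @$aref[$i, $j] = @$aref[$j, $i];    # swap the two elements
    }
}

This does exactly one swap per position, so it is O(n) with no retries and no second array.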
The above comment would be true if it did $c = int(rand @_), but it doesn't. It just uses the floating-point value returned by rand as a hash key, and collisions between those keys are unlikely, even when shuffling 1,000,000 numbers. You can test this with:
perl -wle'undef $x{rand()} while keys(%x) == $x++; print $x'
which adds keys until it gets a collision. Be prepared to wait a very long time.
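For anyone without the parent node handy, the technique being defended here is roughly the following (my reconstruction, not a quote of the original code):

sub HashShuffle {
    my %h;
    # Key the hash by the raw floating-point value from rand();
    # a key collision would silently drop one of the elements.
    $h{rand()} = $_ for @_;
    return values %h;    # relies on hash traversal order for the "shuffling"
}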
Though the books always say that the hash is traversed in a "random" order, you should not take that to mean it is traversed in a TRULY RANDOM order. They just meant you should not write your program such that it will depend on any PARTICULAR order.
A hashtable implementation might decide to visit the values of %foo in a key order that looks suspiciously artificial: 0.05, 0.10, 0.15, 0.20, 0.06, 0.11, 0.16, 0.21, etc. You are hoping that the hashtable implementation doesn't actually bear any relationship to the pseudo-random number generator. Well, as seen in a recent security bug-fix, hash functions CAN be tied to random functions. You might end up aligning the two and defeating any appearance of unpredictability.
This hashtable technique is interesting, but I wouldn't trust it to return things in a random order.
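If you want a shuffle you can actually rely on, the usual tool (outside this thread's hash trick) is List::Util's shuffle, which does a conventional O(n) Fisher-Yates-style shuffle with no reliance on hash order:

use List::Util qw(shuffle);
my @shuffled = shuffle(1 .. 100_000);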
-- [ e d @ h a l l e y . c c ]