in reply to Efficiently selecting a random, weighted element
And blam-o, you're set up to choose your next file: it's as if file_d.txt never existed, and you can repeat ad infinitum until you've selected enough files.
Depending on the number of files, what about just repeatedly picking additional files until you get something different from the first? Your "pick" algorithm is fast, so why splice out a file and recompute offsets each time?
If you're picking a high percentage of the total files, you'll waste a lot of picks on files already chosen, but if you're picking 2 of 300 files, it should work pretty well.
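Something like this rough sketch, for instance. The %weight data and the linear-scan pick_weighted are illustrative stand-ins, not code from the thread; only the retry-on-duplicate loop is the point here:

```perl
use strict;
use warnings;

# Illustrative weights only -- names and numbers are made up,
# not taken from the original post.
my %weight = (
    'file_a.txt' => 5,
    'file_b.txt' => 3,
    'file_c.txt' => 1,
    'file_d.txt' => 1,
);

# A simple linear-scan weighted pick: draw a number in [0, total)
# and walk the weights until the draw goes negative.
sub pick_weighted {
    my ($weights) = @_;
    my $total = 0;
    $total += $_ for values %$weights;
    my $r = rand($total);
    for my $file ( keys %$weights ) {
        return $file if ( $r -= $weights->{$file} ) < 0;
    }
    # Guard against floating-point edge cases.
    return ( keys %$weights )[-1];
}

# Pick $n distinct files by re-picking whenever we hit a duplicate.
# No splicing, no recomputed offsets -- duplicates are just thrown away.
sub pick_n_distinct {
    my ( $weights, $n ) = @_;
    my %seen;
    $seen{ pick_weighted($weights) } = 1 while keys %seen < $n;
    return keys %seen;
}

print "Picked: @{[ pick_n_distinct( \%weight, 2 ) ]}\n";
```

The expected number of wasted picks stays small as long as the already-chosen files account for a small share of the total weight; at 2 of 300 files, the retry loop will almost never fire more than once.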
-xdg
Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.
Replies are listed 'Best First'.
Re^2: Efficiently selecting a random, weighted element
by jimt (Chaplain) on Oct 15, 2006 at 12:41 UTC
by xdg (Monsignor) on Oct 15, 2006 at 13:23 UTC
Re^2: Efficiently selecting a random, weighted element
by an0 (Initiate) on Oct 15, 2006 at 08:54 UTC