kbeen has asked for the wisdom of the Perl Monks concerning the following question:

Hi,
I am currently working on a CGI / database search that will display 5 results per page, and they have to be "random". Actually this just means that each item must have an equal opportunity to appear first in each new search. I have accomplished this easily enough by rearranging the results based on a random number produced each time, then sending that number to the next page, where the db is queried again; the next page's "random" results come back in exactly the same order as the first, only this time I show the next 5.

my $total_count;   # the total number of items returned by the search
my $newcount = ($total_count - 1);
my $ORDER;         # if this is not the first page, the original random number keeping the order between pages

if (!$ORDER) {
    $random_num = int(rand($total_count)) + 1;
}
else {
    $random_num = $ORDER;
}

for ($z = 1; $z < $total_count; $z++) {
    my $random = $random_num;
    while ($newcount < $random) {
        $random = ($random - $newcount);
    }
    for ($a = 0; $a <= $newcount; $a++) {
        if ($a < $random)  { push(@front, $nonrandom[$a]); }
        if ($a == $random) { push(@final, $nonrandom[$a]); }
        if ($a > $random)  { push(@back,  $nonrandom[$a]); }
    }
    @nonrandom = (@back, @front);
    @back  = ();
    @front = ();
    $newcount--;
}
push(@final, $nonrandom[0]);   # the one element left over

$thispage = $pagenumber * 5;
for ($a = ($thispage - 1); $a < ($thispage + 4); $a++) {
    push(@thispageresults, $final[$a]);
}
Although my solution is fine this time, I am sure someone knows a cooler way to get truly random results and pass the same randomness between pages in a CGI script.
I first thought about saving the initial query results, which could be made truly random, in a temporary file to be read on each subsequent call needing that same sequence, but decided it would be more trouble than it's worth to worry about how long the files should be kept, and what would happen if too many visitors did too many different searches (with thousands of items returned in the results), filling up my disk space with gigabytes of random replies.
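(For reference, what I had in mind was roughly the sketch below, using Storable to cache the shuffled list under a key passed between pages; shuffle_somehow, the file naming, and the missing cleanup policy are just placeholders, and the cleanup problem is exactly what put me off.)

    use Storable qw(store retrieve);

    # Hypothetical sketch: cache the shuffled results under a per-search key.
    my $cache = "/tmp/search-$search_key.stor";   # $search_key is passed between pages

    my $results;
    if (-e $cache) {
        $results = retrieve($cache);              # later pages reuse the same order
    }
    else {
        $results = [ shuffle_somehow(@raw_results) ];
        store($results, $cache);                  # the first page writes the order out
    }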

Any ideas and advice would be great!
Kbeen.

Replies are listed 'Best First'.
Re: Predictable Randomness
by Masem (Monsignor) on Aug 11, 2001 at 22:04 UTC
    You need to take advantage of how Perl's srand works. If it's passed the same number, then all subsequent rand calls will return the same sequence of random numbers. So what I'd do is (a rough sketch follows the list):
    • Generate a truly random number (seed srand with the time or some such, then call rand), or just use the time directly.
    • Grab the data from the SQL.
    • Seed srand with your random number above.
    • Do a 'random' sort (preferably with a module like Shuffle.pm); this should remain the same as long as the srand seed is the same.
    • Present needed items.
    • Make sure to bury the srand seed in a hidden field so that you can grab it next time.
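    Putting those steps together, a minimal sketch (assuming CGI.pm and List::Util's shuffle; fetch_results and the field names are made up for illustration):

        use strict;
        use warnings;
        use CGI;
        use List::Util qw(shuffle);

        my $q = CGI->new;

        # Reuse the seed from the previous page, or make a fresh one on the first hit.
        my $seed = $q->param('seed') || (time() ^ $$);

        my @results = fetch_results($q->param('query'));   # hypothetical DB query

        srand($seed);                      # same seed => same "random" order on every page
        my @shuffled = shuffle(@results);

        # Show items 0..4 on page 1, 5..9 on page 2, and so on.
        my $page  = $q->param('page') || 1;
        my @slice = grep { defined } @shuffled[ ($page - 1) * 5 .. $page * 5 - 1 ];

        # ...print @slice, and emit the seed in a hidden field so the next
        # request reproduces the same ordering:
        #   print qq{<input type="hidden" name="seed" value="$seed">\n};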

    -----------------------------------------------------
    Dr. Michael K. Neylon - mneylon-pm@masemware.com || "You've left the lens cap of your mind on again, Pinky" - The Brain

      Just as a side note, generating random numbers for network usage is much harder than it seems.

      In the above case security and true unpredictability are not fundamental, so it might be enough to feed srand with the current time. It's at least advisable to XOR it (the current time) with the process ID: a busy Web server is likely to respond to more than one request per second, which is the resolution returned by time and similar functions.
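      That seeding step is a one-liner; a minimal sketch using only the built-ins:

          # XOR the epoch time with the process ID, so two requests handled in the
          # same second by different server processes still get different seeds.
          srand(time() ^ $$);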

      When security is a concern, this RFC can be very useful. It's an interesting read anyway.

      -- TMTOWTDI

Re: Predictable Randomness
by traveler (Parson) on Aug 12, 2001 at 02:55 UTC
    I've had great results with Algorithm::Numerical::Shuffle. My lists have fewer than 100 elements, though, and I'm not sure how big yours are. If you use it each time, you'll get a differently shuffled list each time, so if you must avoid duplicates you'll need to keep a per-user list (it seems that your code would need that, too...).
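    For reference, using the module is about as simple as it gets (a sketch; it assumes Algorithm::Numerical::Shuffle is installed from CPAN and that it shuffles via Perl's built-in rand, so re-seeding srand first keeps the order repeatable between pages):

        use Algorithm::Numerical::Shuffle qw(shuffle);

        srand($seed);                        # $seed carried over from the previous page
        my @shuffled = shuffle(@results);    # returns the list in shuffled order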

    HTH, --traveler