in reply to Random Math Question
That's a big number. Um... a quick google search on "factorial approximation" shows that ln(100_000!) is about 100_000*ln(100_000) - 100_000, roughly 1.05 million -- but that's in nats (natural-log units), so divide by ln(2) to get bits: about 1.5 million bits to pick one of the 100_000! orderings.
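If you'd rather not trust my mental arithmetic, a couple of lines of Python will do the exact sum (math.lgamma(n + 1) is ln(n!)):

```python
import math

n = 100_000
nats = math.lgamma(n + 1)       # ln(100_000!) via the log-gamma function
bits = nats / math.log(2)       # convert natural log to base 2

print(f"ln(100_000!)   ~= {nats:,.0f} nats")
print(f"log2(100_000!) ~= {bits:,.0f} bits  (~{bits / 8 / 1024:.0f} KiB)")
```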
That's really not all that much information -- but way more than a computer can typically provide in a short amount of time. It's not a language issue; in fact, it's more of a hardware issue. Just like us, computers can only perceive the world through external devices (senses), and so they can't "make up" bits of entropy (aka truly random numbers) on their own. They need some sort of device that feeds in bits of entropy gradually from thermal noise or whatever.
Linux implements this, in the form of /dev/random. It pulls entropy from IRQ timings, mouse movements, keyboard event interarrival times, and more stuff that I don't understand. I found one person who made his disks go crazy to measure the rate of entropy "production", and he saw 1-1.5 KB/sec. At that rate, the ~190 KB of entropy needed to randomize your list of 100_000 things would take two or three minutes to accumulate. Not infeasible, though most applications would rather not wait even that long!
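If you want to poke at that yourself on a Linux box, here's a rough sketch (the /proc path and /dev/random are the standard ones, but the numbers you see will depend on your kernel and hardware; on kernels 5.6 and newer, /dev/random rarely blocks once the pool is initialized, so the wait is mostly an older-system thing):

```python
import time

# Kernel's current estimate of available entropy, in bits.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("entropy_avail:", f.read().strip(), "bits")

# Pull a few bytes from /dev/random; on older kernels this read
# may block until the pool refills.
t0 = time.time()
with open("/dev/random", "rb", buffering=0) as dev:
    chunk = dev.read(64)
print(f"read {len(chunk)} bytes in {time.time() - t0:.2f} s")
```

And for actually randomizing the list, Python's shuffle (a Fisher-Yates shuffle under the hood) will happily run off either the kernel's entropy pool or an ordinary seeded PRNG -- a small sketch:

```python
import random

items = list(range(100_000))

random.SystemRandom().shuffle(items)   # bits come from the OS (os.urandom)
random.Random(12345).shuffle(items)    # deterministic, reproducible from the seed
```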
Of course, such truly random numbers generally aren't distinguishable from good pseudorandom numbers, so I wanted to mention my favorite way of gauging randomness:
In the early, early days of the web, I stumbled across a site (I think it was an HTML site; can't remember for sure) with a random number contest. People would send in their opinions of the "most random number" between 1 and 100, and after a suitable period of time waiting for all submissions, the winner would be announced -- whichever number was chosen by the fewest submitters.
Works for me.