in reply to split to hash, problem with random keys

It's pseudo-random because that's how hash tables are stored. A hash function converts the key string into an integer, which is then used to index directly into a position in memory (a "bucket"). The result is that the speed of a hash lookup stays roughly constant no matter how many entries the hash holds; the exception is when different keys hash to the same value (a collision), but there are standard ways to handle that. Compare this to searching a sorted array, where the best general algorithm (binary search) has a worst-case running time of O(log n) — and an unsorted array forces a linear O(n) scan. Not bad, but not nearly as good as a hash table's O(1), collisions aside.
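A quick sketch of the difference (hypothetical data, just for illustration): the hash lookup goes straight to the right bucket, while the array version has to scan.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hash: one lookup, O(1) on average, no scanning through the keys.
my %age = ( alice => 31, bob => 27, carol => 45 );
print $age{bob}, "\n";    # prints 27

# The equivalent array of pairs must be walked (O(n)), or sorted
# first and binary-searched (O(log n)).
my @pairs = ( [ alice => 31 ], [ bob => 27 ], [ carol => 45 ] );
my ($found) = grep { $_->[0] eq 'bob' } @pairs;
print $found->[1], "\n";  # prints 27
```

Same answer both ways, but the `grep` touches every element while the hash lookup doesn't care how big `%age` gets.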

When you call keys %hash, the keys come back in whatever order the hash's internal structure happens to store them — not insertion order, and not sorted order. Note that the algorithm used to do the hashing can change between Perl releases (it recently changed in Perl 5.8.0), so don't rely on the order staying the same anywhere. If you need a predictable order, sort the keys yourself.
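A small demonstration of the point — the raw keys order is whatever the internal structure gives you, so sort when order matters:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %h = map { $_ => 1 } 'a' .. 'e';

# Internal order: may look scrambled, and may differ between
# Perl versions or even between runs.
print join( ',', keys %h ), "\n";

# If you need a stable order, impose one yourself.
print join( ',', sort keys %h ), "\n";   # prints a,b,c,d,e
```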

The NIST web site has a decent entry (external link) defining a hash table, with some good links on how they're implemented.

----
I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident.
-- Schemer

Note: All code is untested, unless otherwise stated