I need a random element from a huge hash, one large enough that using keys to get a list of elements cripples my system. (Why didn't I use an array? I didn't plan to have to do this, and most of what I'm doing with this thing involves pulling single records by name.)
I looked at the answers at Pulling random elements from a hash, but they involve the keys method too. Since this only needs to run once, my plan is to iterate through once with each to count the elements, generate unique random numbers in that range, and then iterate through again, grabbing those records as I go.
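For concreteness, the two-pass plan above could be sketched like this (the hash contents, the $want count, and all variable names here are made up for illustration; the real hash would come from elsewhere):

```perl
use strict;
use warnings;

# Stand-in data; in practice %hash is the existing huge hash.
my %hash = map { "key$_" => "value$_" } 1 .. 100;
my $want = 5;    # how many random records to pull

# Pass 1: count the elements without ever building a key list.
# each() returns undef when exhausted, which also resets the iterator.
my $count = 0;
$count++ while defined each %hash;

# Pick $want unique random indices in 0 .. $count - 1.
# A small hash of indices stays tiny even when %hash is huge.
my %wanted;
$wanted{ int rand $count } = 1 while keys %wanted < $want;

# Pass 2: walk the hash again and grab the records at those positions.
my @picked;
my $i = 0;
while ( my ( $k, $v ) = each %hash ) {
    push @picked, [ $k, $v ] if $wanted{$i};
    $i++;
}
```

This relies on each producing the same ordering in both passes, which holds as long as %hash is not modified between them (see the side question below).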
This seems gross, but I backed myself into the corner of having to do it. I was wondering, though, if some of the minds here had a cleaner solution.
Side question: perldoc states that, for a given run of perl and an unmodified hash, each, keys, and values are guaranteed to have the same ordering as each other. What about repeated iterations of the same type? For example, if I use keys, process the results without changing the hash, and then use keys again, do I get the same ordering? My testing says yes, but is it guaranteed?
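A quick empirical check of the side question (this demonstrates the behavior on one run; it is not a substitute for what perldoc -f keys actually guarantees):

```perl
use strict;
use warnings;

# Call keys() twice on an unmodified hash within one run of perl
# and compare the orderings.
my %h = map { $_ => 1 } 'a' .. 'z';
my @first  = keys %h;
my @second = keys %h;

print +( "@first" eq "@second" ) ? "same order\n" : "different order\n";
```

Note that keys() also resets the hash's internal each() iterator, so interleaving keys with an in-progress each loop over the same hash is a separate hazard from the ordering question.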
In reply to Random hash element (low memory edition) by amarquis