in reply to Populating a hash

I believe (though I am not 100% sure) that in your case (you are putting a million keys into your hash, and I assume you want to get them out again later) you should preallocate space by telling Perl you need more buckets, i.e.
my %hash; keys(%hash) = 1_000_000;
The number you assign to keys is rounded up to the next power of two.

My understanding is (I would be interested to learn if this is correct) that when you have more buckets you have fewer collisions (i.e. fewer entries falling into the same bucket), and so retrieval of an entry is more efficient.
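As a rough illustration (the key counts and sub names below are my own, not the OP's), a quick Benchmark comparison of filling a hash with and without preallocated buckets might look like this:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    cmpthese( -3, {
        # let the hash start at 8 buckets and double as it fills
        no_prealloc => sub {
            my %h;
            $h{$_}++ for 1 .. 100_000;
        },
        # ask for the buckets up front (rounded up to the next power of two)
        prealloc => sub {
            my %h;
            keys(%h) = 100_000;
            $h{$_}++ for 1 .. 100_000;
        },
    });

How much (if anything) the preallocation buys you will depend on your perl version and the real key count, so it is worth measuring on your own data.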

Re^2: Populating a hash
by BrowserUk (Patriarch) on Mar 11, 2012 at 15:43 UTC

    Because of the way $a*$b works, he's creating somewhat fewer than 250,000 keys. Many are incremented multiple times.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

    The start of some sanity?

      You are right of course.

      And all the primes above 1000 don't get incremented at all...

      So would preallocating buckets be helpful or hurtful here (or would it not matter at all)?

        So would preallocating buckets be helpful or hurtful here

        If you know the final size, preallocating to that size will certainly do no harm and will speed things up a little.

        Hashes get 8 buckets to start, then double in size each time they fill. When the doubling occurs, the key/value pairs have to be redistributed across the new, larger set of buckets, which is a relatively expensive operation. Moving straight to the final number of buckets avoids that.
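        A quick way to watch that doubling (on perls before 5.26, where a non-empty hash in scalar context reports its used/total bucket counts) is something like:

            use strict;
            use warnings;

            my %h;
            for my $n ( 1 .. 64 ) {
                $h{$n} = 1;
                # report the bucket ratio whenever the key count is a power of two
                printf "%3d keys -> buckets %s\n", $n, scalar %h
                    if ( $n & ( $n - 1 ) ) == 0;
            }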

        Of course, even if you pre-sized the hash to 2^18 (262,144) buckets, there's no guarantee that will be the final size. It could be that hash collisions at that size will prompt another doubling, dependent upon the keys.

        You could go straight to the next size up to be safe, but then you are potentially wasting memory, which may be acceptable if the ultimate criterion is speed.

        But without knowing the OP's actual criteria there's no way to know for sure.

        If runtime performance is his only goal, using an array rather than a hash would be twice as fast and, despite being sparse, would actually use less memory.
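        Assuming the loop is along the lines of counting the products $a*$b for $a and $b in 1 .. 1_000 (which is how the roughly 250,000 keys mentioned above would arise), the array version would look something like this:

            use strict;
            use warnings;

            # sketch only: count products in an array indexed by the product itself
            my @count;
            for my $a ( 1 .. 1_000 ) {
                for my $b ( 1 .. 1_000 ) {
                    $count[ $a * $b ]++;
                }
            }

            # defined slots correspond to the keys the hash version would have held
            my $distinct = grep { defined } @count;
            print "distinct products: $distinct\n";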

