in reply to Using keys to increase hash efficiency
Also, if you only preallocate 10000 buckets and you're inserting 10000 keys, Perl is still going to have to reallocate at least once along the way (and that last doubling is the largest reallocation, therefore the slowest?). A better test might be to preallocate 20000 buckets and then do the 10000 insertions, as in the sketch below.
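For what it's worth, here's roughly how I'd set that benchmark up, a minimal sketch using the core Benchmark module. The key names, the count, and the two-subroutine comparison are just placeholders; the one real piece of Perl here is that assigning to keys(%h) preallocates buckets:

#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $n = 10_000;

cmpthese(-3, {
    # Let the hash grow (and split) on demand as keys come in.
    grow_on_demand => sub {
        my %h;
        $h{"key$_"} = $_ for 1 .. $n;
    },
    # Ask perl for at least 20000 buckets up front, so no doubling
    # should be needed during the insertions.
    preallocated => sub {
        my %h;
        keys(%h) = 2 * $n;
        $h{"key$_"} = $_ for 1 .. $n;
    },
});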
Does anyone know how clever Perl's hashing function is? How does Perl deal with hash key collisions? And how plausible is it that Perl gets enough collisions that the number of buckets in the hash hardly matters?
Alan
Update: Thanks for the pointer to the information about the Perl hashing function. After reading that, I can confidently say that in older versions of Perl, inserting sequential keys would cause a lot of hash collisions, but with the new hashing function that shouldn't be a problem.
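If anyone wants to check this on their own perl, here's a rough sketch. On perls before 5.26, a hash in scalar context reports "used/allocated" buckets; my understanding is that on 5.26 and later the same figure comes from Hash::Util's bucket_ratio instead (the string eval is there so the prototype applies and old perls that lack bucket_ratio fall through to the old behaviour):

#!/usr/bin/perl
use strict;
use warnings;

# Insert sequential keys, then see how many buckets actually got used.
my %h;
$h{$_} = 1 for 1 .. 10_000;

my $usage = eval 'use Hash::Util qw(bucket_ratio); bucket_ratio(%h)'
            || scalar(%h);   # pre-5.26 perls report "used/allocated" here

# Something like "8124/16384" suggests keys are spreading out fine;
# a tiny numerator would mean heavy collisions.
print "bucket usage: $usage\n";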
RE: Re: Using keys to increase hash efficiency
by Anonymous Monk on Jul 29, 2000 at 02:39 UTC