Can one or more wise monks advise on this issue?
I have a very large hash (over 3 GB) that may be accessed 100,000 times or more during the life of a script (once for each record input to the script). Instead of hitting the large hash every time, on each run of the script I could build a much smaller hash, with at most a few thousand keys, populated from the results of the initial lookups in the large hash. I could then check whether a key exists in the small hash before ever touching the large hash.
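A minimal sketch of that gateway arrangement, assuming the large hash is %big and the per-run cache is %small (both names are mine, as is the lookup() wrapper):

    use strict;
    use warnings;

    my %big;    # the 3 GB hash, populated elsewhere
    my %small;  # per-run cache, grows to at most a few thousand keys

    # Consult the small hash first; fall back to the big one and
    # remember the answer. Note that a miss in %big is cached too,
    # because assigning undef still makes exists() true for that key.
    sub lookup {
        my ($key) = @_;
        return $small{$key} if exists $small{$key};
        return $small{$key} = $big{$key};
    }

This is essentially what the core Memoize module does for whole functions, so if the lookup is already wrapped in a sub, memoizing it may give the same effect for free.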
Since hash access is normally O(1), this may be a non-starter of an idea. But the large hash does have collisions, and I would like to find a way to speed up my script (no, I haven't done any profiling yet). For what it's worth, the values in the large hash are strings of perhaps 80 characters.
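Since no profiling has been done yet, it may be worth measuring before committing: Devel::NYTProf will show where the time really goes, and the core Benchmark module can compare the two lookup strategies directly. A rough sketch, with the sizes and key-repetition rate simply assumed to mimic the situation described above:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Stand-ins: 100,000 entries with 80-character values, and an
    # input stream of 100,000 record keys drawn from ~2,000 distinct
    # values (so the small hash would hold a few thousand keys at most).
    my %big  = map { $_ => 'x' x 80 } 1 .. 100_000;
    my @keys = map { 1 + int rand 2_000 } 1 .. 100_000;

    cmpthese(-3, {
        direct => sub {
            for my $k (@keys) { my $v = $big{$k} }
        },
        cached => sub {
            my %small;    # fresh cache, as at the start of each run
            for my $k (@keys) {
                my $v = exists $small{$k} ? $small{$k}
                                          : ($small{$k} = $big{$k});
            }
        },
    });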
Now, if I created a small hash, it would contain only a dozen or so distinct values, so I am concerned about hash collisions. One way to avoid collisions in my small hash would be to make each value an array (probably a reference to an anonymous array) and make the second element of the array a unique value; a sketch of what I mean follows below. That seems rather kludgy to me, but I should think it would eliminate hash collisions at the price of significant overhead in creating the smaller hash.

So, to my questions. Is building a small hash based on the results of accessing the large hash likely to speed up my script? If so, what would be the best way to ensure that the smaller hash has unique values, given that the values of interest are just short ASCII strings that are not at all unique?
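The array-of-two-elements arrangement might look like this; %small, the $seq counter, and the two helper subs are all hypothetical names of mine:

    use strict;
    use warnings;

    my %small;
    my $seq = 0;

    # Store each value as [ string, serial ], so two keys holding the
    # same 80-character string still map to distinguishable values.
    sub remember {
        my ($key, $string) = @_;
        $small{$key} = [ $string, ++$seq ];
    }

    sub string_of { my ($key) = @_; return $small{$key}[0] }
    sub serial_of { my ($key) = @_; return $small{$key}[1] }

For example, after remember('rec42', $payload), string_of('rec42') returns the original string and serial_of('rec42') returns a value unique to that entry.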
A blessing on all monks with insight.