in reply to Efficient giant hashes

If you suddenly see a notable drop in performance, you have most likely hit the threshold where the machine starts swapping. I'm a bit surprised that you hit that limit so soon, though; what are you storing in your hash?
    use Devel::Size 'total_size';
    my %hash = map {$_, rand() . ""} 1 .. 100_000;
    print total_size(\%hash), "\n";
    __END__
    7713194
That's less than 8 MB, which shouldn't be much of a problem on a modern machine.

But the fact is, Perl is memory hungry. The more structures you have, the more memory you use; the more complex those structures are, the more memory you use.
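For example, here is a rough comparison, using the same Devel::Size approach as above, of storing the same 100,000 numbers in increasingly complex structures (the exact numbers vary by perl version and platform):

    use Devel::Size 'total_size';
    my $string = join ',', 1 .. 100_000;                       # one flat scalar
    my @array  = (1 .. 100_000);                               # one scalar per element
    my %nested = map { $_ => { value => $_ } } 1 .. 100_000;   # a hashref per element
    print total_size(\$string), "\n";
    print total_size(\@array),  "\n";
    print total_size(\%nested), "\n";

Each level of nesting adds per-scalar and per-container overhead, so the hash of hashrefs ends up many times larger than the flat string holding the same data.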

Speeding it up is only possible by using less memory at a time. Using tie as you propose is not going to solve it. If there were a known way of speeding up hash access, it would already be in the core! Not to mention that tying is a slow mechanism: every hash access, no matter how trivial, results in a call to a Perl subroutine. It is possible to use the tie mechanism to store the hash on disk instead of in memory, but unless you would otherwise run out of memory, that's not going to change your performance for the better. Regular disk access is not likely to be faster than accessing your swap area.
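If it does come to that, a minimal sketch of tying a hash to a disk file with the DB_File module might look something like this (the file name 'giant.db' is just a placeholder):

    use DB_File;
    use Fcntl qw(O_RDWR O_CREAT);

    my %hash;
    # Every read or write of %hash now goes through DB_File's tie methods
    # and hits the Berkeley DB file on disk instead of RAM.
    tie %hash, 'DB_File', 'giant.db', O_RDWR | O_CREAT, 0666, $DB_HASH
        or die "Cannot tie giant.db: $!";

    $hash{key} = 'value';    # stored on disk, not in memory
    print $hash{key}, "\n";

    untie %hash;

Every lookup and store then costs a method call plus a disk access, so expect it to be slower, not faster, than an in-memory hash that still fits in RAM.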

Replies are listed 'Best First'.
Re^2: Efficient giant hashes
by perlfan (Parson) on Mar 10, 2005 at 18:43 UTC
    Speeding it up is only possible by using less memory at a time.

    Not if what he is trying to do has a complexity greater than O(n), i.e., if he is entering another loop of some kind for each element (or even just for every ith element).
      Well, yeah, anything else he does in his program could also be made faster. But since we don't know the rest of his program, and his question isn't about that either, it's nothing more than pure speculation, and rather pointless.

      Perhaps he's recompiling perl each time he inserts a hash element.