in reply to Comparing two hash tables

There is no way to answer your question definitively. Sure, if you're working with smaller hashes, your program's performance might improve. It might also degrade, depending upon how the keys end up distributed across buckets (unlikely, but it illustrates the point), and that depends upon the keys themselves. Further, if deleting those keys takes longer than the savings you get from working with a smaller hash, then it's a waste of time. And what about the value of your own time? If this program runs once a week for about 30 minutes, is it really worth it?

In any event, figure out what's important to you, and if you think that deleting those keys might help, measure it with the Benchmark module. It comes standard with Perl.
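For instance, here's a rough sketch of the sort of comparison you could run with Benchmark's cmpthese(). The data and the two subs are just placeholders, not your actual code; swap in your real hashes and lookups:

#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Placeholder data -- substitute your real hashes.
my %h1      = map { $_ => 1 } 1 .. 10_000;
my @lookups = ( 1 .. 10_000 );

cmpthese( -3, {    # run each sub for at least 3 CPU seconds
    keep => sub {
        my %copy  = %h1;
        my $found = 0;
        for my $k (@lookups) {
            $found++ if exists $copy{$k};
        }
    },
    shrink => sub {
        my %copy  = %h1;
        my $found = 0;
        for my $k (@lookups) {
            if ( exists $copy{$k} ) {
                $found++;
                delete $copy{$k};    # shrink the hash as we go
            }
        }
    },
} );

cmpthese() prints a rate table, so you can see at a glance whether deleting keys actually buys you anything for your data sizes.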

Cheers,
Ovid

Join the Perlmonks Setiathome Group or just click on the link and check out our stats.

Re: (Ovid) Re: Comparing two hash tables
by Rajiv (Initiate) on Mar 28, 2002 at 22:01 UTC
    Thanks for your reply. The reason why I am trying to delete the similar keys is that the search for the next similar key would gradually become faster owing to the smaller number of keys left. I will also try calculating the time taken using the Benchmark module.
      The reason why I am trying to delete the similar keys is that the search for the next similar key would gradually become faster owing to the smaller number of keys left
      The time to access a hash element does not depend on the number of keys, but on whether the distribution of the keys into the hash's buckets is uniform or not.
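      One rough way to peek at this (a sketch, not a general recipe): on the perls of this vintage, a non-empty hash evaluated in scalar context returns the number of used buckets over the number of allocated buckets. If the first number is very small relative to the number of keys, many keys are colliding in the same buckets:

        my %hash  = map { $_ => 1 } 1 .. 5_000;
        my $usage = %hash;              # e.g. something like "3741/8192"
        print "bucket usage: $usage\n"; # used buckets / allocated buckets

      (On much newer perls this scalar-context behaviour changed and you get the key count instead; there the Hash::Util module exposes the bucket information.)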

      /prakash

        So how can I check whether the distribution of keys in the buckets is uniform or not? Please suggest. Thanks