Re: Re: A short meditation about hash search performance

by pg (Canon)
on Nov 16, 2003 at 03:19 UTC ( [id://307414] )


in reply to Re: A short meditation about hash search performance
in thread A short meditation about hash search performance

"And a billion differs from 1 only by a constant - so that's O(1)"

You obviously don't understand what O(1) means.

Say we have an array of 1 billion elements. Let's look at two different search algorithms:

  1. Search from beginning to end, going through each element one by one, until you hit what you are searching for. In the worst case (the element is at the end of the array), you have to hit 1 billion elements, but according to you, that's O(1). I say it is O(n). We never put a restriction saying that an array can contain at most 1 billion elements (so the size of an array in general is not a constant, although it is a constant for a given array at one given observation point).
  2. Do a binary search; in the worst case, you have to hit log2(1 billion) ~ 30 elements. I call this O(log2(n)); according to you it is also O(1). (A small Perl sketch contrasting the two follows this list.)
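
For illustration only, here is a minimal Perl sketch (mine, not part of the original argument) that counts the comparisons each approach makes on a million-element array; the sizes are made up, but the counts grow with n and with log2(n) respectively:

    use strict;
    use warnings;

    my @sorted = (1 .. 1_000_000);   # stand-in for the big array
    my $target = 1_000_000;          # worst case: the last element

    # 1. Linear search: comparisons grow with n -- O(n).
    my $linear_steps = 0;
    for my $elem (@sorted) {
        $linear_steps++;
        last if $elem == $target;
    }

    # 2. Binary search: comparisons grow with log2(n) -- O(log2(n)).
    my ($lo, $hi, $binary_steps) = (0, $#sorted, 0);
    while ($lo <= $hi) {
        $binary_steps++;
        my $mid = int(($lo + $hi) / 2);
        if    ($sorted[$mid] < $target) { $lo = $mid + 1 }
        elsif ($sorted[$mid] > $target) { $hi = $mid - 1 }
        else                            { last }
    }

    print "linear: $linear_steps comparisons, binary: $binary_steps comparisons\n";
    # prints roughly "linear: 1000000 comparisons, binary: 20 comparisons"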

As everyone knows, the performance of those two approaches is so different, but according to your theory, they are both O(1)! The math here is way off! Well... I certainly don't mind if you insist on your idea, but please don't confuse the general public.

What you said would be right if we put a restriction saying that a hash can contain at most 1 billion elements, as O(1 billion) has the same complexity as O(1), even though 1 billion is much bigger than 1.

However, O(n) is more complex than O(1 billion); even compared with O(1 billion ** 1 billion), O(n) is still more complex. Why? Because n is a variable, which can grow without limit. 1 billion ** 1 billion is huge, but n grows without bound and will eventually pass 1 billion ** 1 billion. In our context, please remember that the size of a hash is a variable (which potentially grows without limit), and your analysis has to reflect this fact. Don't confuse it with the size of a given hash at a given time.


Replies are listed 'Best First'.
Re: A short meditation about hash search performance
by Abigail-II (Bishop) on Nov 16, 2003 at 23:03 UTC
    You obviously don't understand what O(1) means.
    Let's see. The definition of big O is:
    f(n) = O (g (n)) iff there are an M > 0 and a c > 0 such that for all m > M, 0 <= f(m) <= c * g (m). [1] [2] [3]
    I don't have any problem understanding it. In layman's terms, it means that a function f of n is in the order of g of n if, and only if, there is a constant such that, once n gets large enough, the value of f is at most the value of g times said constant.
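
    As a concrete, made-up example of the definition: f(n) = 3n + 5 is O (n), because with M = 5 and c = 4 we have 0 <= 3m + 5 <= 4m for every m > 5. A throwaway Perl check of that bound:

        use strict;
        use warnings;

        my ($c, $M) = (4, 5);            # constants chosen for this example only
        for my $m ($M + 1 .. 1_000) {
            my $f = 3 * $m + 5;          # f(m)
            my $g = $m;                  # g(m)
            die "bound fails at m=$m" unless 0 <= $f && $f <= $c * $g;
        }
        print "3n + 5 is O(n): 0 <= f(m) <= c * g(m) for all tested m > M\n";
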
    Search from beginning to end, going through each element one by one, until you hit what you are searching for. In the worst case (the element is at the end of the array), you have to hit 1 billion elements, but according to you, that's O(1). I say it is O(n). We never put a restriction saying that an array can contain at most 1 billion elements (so the size of an array in general is not a constant, although it is a constant for a given array at one given observation point).
    Hello? We never put a restriction on the size? Come again. What do you call:
    And still O(1) is not reachable, unless each element resolve a unique key ;-)
    That's a restriction of 1. You started out by putting restrictions on it, claiming that the search algorithm is O (1) only if there's a restriction of size 1. I, on the other hand, pointed out that as long as there is a restriction on the length of the chain, it doesn't matter what the restriction is: 1, 14 (for 5.8.2), or a billion. If there's a restriction on the size, even with a linear search it's O (1). Here's a proof:
    Suppose the chain is limited to length K, where K is a constant, independent of the number of keys in the hash. Searching for a key is a two-step process: first we need to find the bucket the key hashes to, then we need to find the key in the associated chain. Finding the right bucket takes constant time. Traversing the chain takes at most K * e time, for some constant e. So, searching for the element takes at most:
                                   e * K + O (1),  e >= 0
         {definition of O()}  <=   e * K + d * 1,  e >= 0, d > 0
         {arithmetic}         ==  (e * K + d) * 1, e >= 0, d > 0
         {c == e * K + d}     ==   c * 1
         {c > 0}              ==   O (1).
                                                            q.e.d.   
    
    I won't deny the performance will be rather lousy, but it's still O (1). Which proves that big-Oh doesn't say everything.
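
    A rough Perl sketch of that idea (my own illustration; this is not how perl's internal hashes are implemented): if every bucket's chain is capped at a constant K, a lookup performs at most K key comparisons, however many keys are stored, which is exactly the e * K term above:

        use strict;
        use warnings;

        my $NBUCKETS = 1024;   # fixed number of buckets, assumed for the sketch
        my $K        = 14;     # cap the (not shown) insert code is assumed to enforce

        sub toy_fetch {
            my ($buckets, $key) = @_;
            # deliberately crude bucket choice; any constant-time hash will do here
            my $chain = $buckets->[ length($key) % $NBUCKETS ] // [];
            for my $pair (@$chain) {       # at most K iterations by construction
                return $pair->[1] if $pair->[0] eq $key;
            }
            return undef;                  # not found: still at most K comparisons
        }

    However many keys the structure holds, the loop body runs at most K times, so the worst case is a constant number of comparisons (and, as noted, a rather lousy constant).
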
    [1] Cormen, Leiserson, and Rivest: Introduction to Algorithms. MIT Press, 1990. p. 26.
    [2] Knuth: The Art of Computer Programming, Volume 1: Fundamental Algorithms. Third Edition. Addison-Wesley, 1997. p. 107.
    [3] Sedgewick and Flajolet: Analysis of Algorithms. Addison-Wesley, 1996. p. 4.

    Abigail

      Abigail, I have two minor questions for you. First off, you speak of finding the correct bucket as occurring in constant time. Given that the time to calculate the bucket value depends on the length of the key, I don't quite see how this is correct. Or does this factor disappear because it averages out to a constant time in normal use? I have a similar concern about the doubling of the buckets during insertion. My by now hazy recollection of big O() says that this behaviour is significant and should be included in the O() of hash insertion. Is this wrong? If it's not wrong, how would it be calculated? I haven't the foggiest idea how you would calculate the effect of a factor that comes into play so rarely. Or is it again that it averages out to nothing and so can be left out of the equation?


      ---
      demerphq

        First they ignore you, then they laugh at you, then they fight you, then you win.
        -- Gandhi


        Yes, you are right that the length of a key plays a role in calculating the hash value, and it plays a role in comparing two keys as well. You can take this time into account and say insertion/searching takes O (k), where k is the length of the input. There's no relationship between k and n though, and usually we aren't interested in this factor. We just define that calculating a hash value can be done in constant time, and so can comparing two keys.
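
        To make that O (k) factor concrete, here is a tiny sketch (mine, with an invented hash function; perl's real one differs) in which the work to hash a key is one operation per character, so it grows with the key length k but not with the number of keys n:

            use strict;
            use warnings;

            sub hash_ops {                   # invented hash: one step per character
                my ($key) = @_;
                my ($h, $ops) = (0, 0);
                for my $ch (split //, $key) {
                    $h = ($h * 33 + ord $ch) % 2**32;
                    $ops++;
                }
                return ($h, $ops);
            }

            my (undef, $short_ops) = hash_ops('foo');
            my (undef, $long_ops)  = hash_ops('x' x 1_000);
            print "3-char key: $short_ops ops, 1000-char key: $long_ops ops\n";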

        As for the doubling, this factors out (assuming it isn't possible to construct a set of keys such that even after repeated doubling, they keep hashing to the same values). The sketch of the proof is as follows: suppose we rebuild the hash after N inserts; that is, the hash is rebuilt when it contains N keys. Building a hash out of N keys takes O (N) time - that is, O (1) per key. Now you also have to show that the next rebuild doesn't take place before another c * N keys have been inserted, for some constant c > 1. This means that if you rebuild a hash with N keys, then for at least N / (1 + c) keys this is the first time they are involved in a rebuild, for at least N / (1 + c)^2 keys it is the second time, etc. If you do the math, you will see that there are some keys that have been charged O (log N) on rebuilds, but because there are so many more that have been charged less, it works out to O (1) amortized time per insert. So, yes, a single insert can take O (N) time, but, starting from an empty hash, N inserts take O (N) time in total.
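
        A quick back-of-the-envelope Perl check of that amortized claim (my own sketch; the growth rule is invented, not perl's actual one): count how much total "rehash work" doubling costs while inserting N keys, charging one unit per key touched during a rebuild:

            use strict;
            use warnings;

            my $N           = 1_000_000;
            my $buckets     = 8;        # assumed starting size
            my $keys        = 0;
            my $rehash_work = 0;

            for (1 .. $N) {
                $keys++;
                if ($keys > $buckets) {     # invented rule: double when "full"
                    $rehash_work += $keys;  # a rebuild touches every key once
                    $buckets     *= 2;
                }
            }

            printf "%d inserts cost %d units of rehash work (%.2f per insert)\n",
                   $N, $rehash_work, $rehash_work / $N;
            # prints roughly one unit per insert -- a constant, i.e. O (1) amortized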

        Rebuilding after a bunch of inserts is actually a well-known technique for data structures. Often a data structure is only partially rebuilt (giving the technique its name: "partial rebuilding"). Rebuilding the entire data structure is just an extreme variant.

        Abigail

Re: Re: Re: A short meditation about hash search performance
by Boots111 (Hermit) on Nov 16, 2003 at 20:40 UTC
    All~

    I am just referring to the two posts immediately above this, but I must point out that pg is correct. Despite what the points on either node may say...

    The size of a hashtable is a variable (usually n), and the pathological case of inserting everything into the same bucket gives O(n) access for a simple hashtable.

    The only way in which Abigail would be correct is if there were a guarantee that the overflow chain would NEVER exceed one billion entries.

    It is possible that the rehashing will prevent overflow chains from growing too large, but then one must consider the cost of rehashing the table. While that cost is not paid every time, it is likely a very large cost, and thus must be amortized across all calls to insert.

    In general, one could get O(1) access to a hash by ensuring that the overflow chains reach at most a constant length, but this will require rehashing when chains get too long. This would cause hash insertions to be greater than O(1).

    At heart it is a question of trading one cost for another...

    Boots
    ---
    Computer science is merely the post-Turing decline of formal systems theory.
    --???
