I'll be quick because it's 4 am here. I was referring to running time for the big-O complexity.
Space complexity is a different measure, but even if we do count it, the complexity of the algorithms is still O(m+n).
This is because the key idea in the algorithms is that we traverse the 1st array, build a hash from it, and then test the 2nd array against that hash. For arrays with m and n elements that is p*m + q*n (p, q constants), or simply O(m+n). If we did not use hashes, the running time would be O(m*n) because we would need two nested loops. Also, the extra space is on the order of the input (~m for the hash), not ~2^n, so it does not change the O(m+n) bound.
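To make the contrast concrete, here is a minimal sketch of the two approaches (function names are mine, not from your code):

```python
def intersect_nested(a, b):
    """Naive version: two nested loops, O(m*n) comparisons."""
    return [x for x in b if any(x == y for y in a)]

def intersect_hashed(a, b):
    """Hash-based version: one pass to build the hash (set),
    one pass to probe it -- O(m+n) expected time."""
    seen = set(a)                      # O(m) to build
    return [x for x in b if x in seen]  # O(n) to probe, O(1) per lookup
```

Both return the same elements; only the scaling differs as m and n grow.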
I strongly disagree with your statement that big-O notation belongs only in academic papers. The notation does not tell you anything about the actual running time of an algorithm; it is a hint about how the cost scales with the input. Real benchmarks matter most, but they do not replace mathematical analysis, they supplement it. In simple terms, O(m*n) indicates two nested loops, while O(m+n) indicates two independent loops. That's it. There is no reference to actual time.
Complexity theory is a real treasure when we apply it properly. Actually, your method is a variation of a Bloom filter, which is known for being very fast. You cannot beat O(m+n), not even with a special structure like a rooted tree (heap), because any algorithm has to look at all of the input at least once; there is no asymptotically better solution.
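For reference, here is a toy sketch of the Bloom filter idea I mentioned (this class and its parameters are my own illustration, not your code): a fixed bit array plus k hash functions, giving constant-time probabilistic membership tests with possible false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: may say 'maybe present' for an absent item
    (false positive), but never misses an item that was added."""

    def __init__(self, size=1024, hashes=3):
        self.size = size      # number of bits
        self.hashes = hashes  # number of hash functions
        self.bits = 0         # bit array packed into a Python int

    def _positions(self, item):
        # Derive k independent positions by salting one strong hash.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # True means "probably present"; False is definitive.
        return all(self.bits & (1 << pos) for pos in self._positions(item))
```

Adding all m elements and probing n elements is again p*m + q*n work, i.e. O(m+n), just with a smaller memory footprint than an exact hash.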
This paper (PDF) is one of the many proofs that the union problem is linear in the input.
Finally, the new version buk2() may be faster, but it drops a key advantage of the previous one: the ability to handle duplicate array elements. Still, your algorithm is the killer one and I haven't thought of anything better.
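In case it helps, duplicates can be kept without giving up O(m+n) by counting occurrences instead of using a plain set. A minimal sketch (I don't know buk2()'s internals, so this is just the general multiset-intersection idea):

```python
from collections import Counter

def intersect_with_dups(a, b):
    """Multiset intersection: an element appears as many times as it
    occurs in BOTH arrays. Still one pass over each array: O(m+n)."""
    counts = Counter(a)   # occurrence counts for the 1st array, O(m)
    out = []
    for x in b:           # one pass over the 2nd array, O(n)
        if counts[x] > 0:
            out.append(x)
            counts[x] -= 1  # consume one occurrence
    return out
```

With a plain set instead of a Counter, the second 2 in [2, 2, 3] would be lost.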
That's all for now. I'm off to get some sleep and will come back tomorrow, hopefully with some code for the problem. Till then I will use big-O notation to count the sheep!