in reply to Re^3: Fast, Efficient Union and Intersection on arrays
in thread Fast, Efficient Union and Intersection on arrays

I will be quick because it is 4am here. I was referring to running time for the big-O complexity. Space complexity is a different measure, but even if we do count it, the complexity of the algorithms is still O(m+n).

This is because the key idea in the algorithms is that we traverse the first array, create a hash, and then test the hash against the second array, which for arrays with m and n elements costs p*m + q*n for constants p and q, or, better, O(m+n). If we were not to use hashes the running time would be O(m*n), because we would have two nested loops. Also, the space allocations are not ~2^n but ~log N, which is ignored in the O(m+n) notation.
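In case it helps, here is a minimal sketch of the hash approach I am describing (the subroutine and variable names are just illustrative, not code from this thread):

use strict;
use warnings;

# One pass over the first array to build a hash, one pass over the second
# array to probe it: roughly p*m + q*n work, i.e. O(m+n) overall.
sub intersect_and_union {
    my ( $x, $y ) = @_;                          # two array refs
    my %seen  = map { $_ => 1 } @$x;             # traverse the 1st array
    my @both  = grep { $seen{$_} } @$y;          # test the hash against the 2nd
    my %union = ( %seen, map { $_ => 1 } @$y );
    return ( \@both, [ sort { $a <=> $b } keys %union ] );
}

my ( $isect, $union ) = intersect_and_union( [ 1, 2, 3, 4 ], [ 3, 4, 5 ] );
print "intersection: @$isect\n";                 # intersection: 3 4
print "union: @$union\n";                        # union: 1 2 3 4 5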

I will strongly disagree with your statement about big-O notation being done only in academic papers. The notation does not tell you anything about the actual running time of the algorithm, but it is a hint about how the cost scales with the input. Real benchmarks are most important, but they do not replace mathematical analysis; they supplement it. In simple terms, O(m*n) indicates two nested loops, while O(m+n) indicates two independent loops. That's it. There is no reference to actual time.
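To put the same distinction in code, a toy membership check (nothing more than an illustration):

use strict;
use warnings;

my @x = ( 1, 2, 3, 4 );
my @y = ( 3, 4, 5 );

# O(m*n): two nested loops -- every element of @x is checked against
# every element of @y.
my @slow = grep { my $i = $_; grep { $_ == $i } @y } @x;

# O(m+n): two independent loops -- one builds a hash, the other probes it.
my %in_y = map  { $_ => 1 } @y;
my @fast = grep { $in_y{$_} } @x;

print "@slow\n";    # 3 4
print "@fast\n";    # 3 4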

Complexity theory is a real treasure when we can apply it properly. Actually, your method is a variation of a Bloom Filter, which is known for being very fast. To get below O(m+n) a special structure such as a rooted tree (heap) would be needed, because otherwise there is no better solution. This paper (PDF) is one of the many proofs that the union problem is linear in the input.

Finally, the new version buk2() may be faster, but it drops a key advantage of the previous one, which is the ability to handle duplicate array elements. Still, your algorithm is the killer one and I haven't thought of anything better.

That's all for now. I am going to get some sleep and will come back tomorrow, hopefully with some code on the problem. Till then I will use big-O notation to count sheep!

Re^5: Fast, Efficient Union and Intersection on arrays
by BrowserUk (Patriarch) on Nov 21, 2008 at 08:25 UTC

    I agree with you that trying to use big-O to compare the time-costs of space complexity is pretty pointless. I was simply following moritz's lead with the purpose of trying to make that very point.

    I will strongly disagree with your statement about big-O notation being done only in academic papers.

    That's not what I said. I said "that big-O is rarely done properly outside of academic papers", which is quite different. Feel free to disagree with what I actually said also, but you might want to read on first.

    What I said is not a million miles away from your own later statement: "Complexity theory is a real treasure when we can apply it properly."

    The problem is that people will insist on trying to use the analysis of algorithm complexity to compare the time-costs of actual implementations. It simply doesn't work for a variety of reasons.

    • It ignores the reality that in real-world wall-clock seconds, a good implementation can be several orders of magnitude better than a bad implementation of exactly the same algorithm.
    • It ignores that an efficient implementation of a "bad algorithm" can trounce an inefficient implementation of a "good" one.
    • It ignores that for some tasks, there are simply no better algorithms (known), and so implementation efficiency is the only game in town.

      For example, there is currently no known better algorithm for cracking RC4 than brute force, and you are never going to challenge 6 seconds in C with anything written in Perl. As an experiment, I tried calling a C implementation of RC4 from a loop iterating in Perl, and just the cost of moving back and forth between Perl and C meant that this would take days rather than seconds. (There is an assumption here that juster did actually solve the problem in 6 seconds, which I failed to replicate!)

    • It ignores the reality that, unlike (say) a Turing Machine with an infinite tape, on real systems resources are finite and have time-costs associated with using them.

      Memory is finite and takes time to allocate, initialise, populate and free.

      In your own assessments of the algorithms in this thread, you are ignoring the time-costs of insertion and lookup in the hashes. Presumably on the basis of the well-known "hashes are O(1)".

      But in the real world of Perl (and everywhere else also), hashes grow geometrically; they have collisions; and they are notoriously bad, in big-O terms when measured at the implementation level, at exploiting cache coherency.

      Hence a linear search, and algorithms (like B+ trees) that exploit localised linear searching at the lower levels, can beat out hashes for some applications. In the real world.

    Big-O notation is useful for comparing specific algorithms for achieving specific, and usually very limited, tasks. The problem is that real-world tasks tend to be more complex, in that they need the application of several specific algorithms, serially or in parallel, to achieve the final goal. And trying to combine the big-O notations of all the sub-algorithms is a task often far more complex than writing the program and benchmarking it.

    Finally, big-O ignores the reality that, in programmer terms, writing two nested loops is no harder than writing two independent loops. And that for small numbers, the difference in run time can be negligible.

    For example, the OP benchmarked my implementation (not algorithm; moritz had mentioned the possibility of it before I posted) of the vec solution against Roy Johnston's implementation, and his own, on his real-world data sets consisting of "modest sized arrays" of "non-duplicate, positive integers", and found Roy Johnston's the best for his purposes. And that, as they say, is that.

    Out of personal interest, I added Roy's implementation to my own benchmark and found that it beat mine (in its original, non-pre-extending form) when the number of elements in the arrays was less than 30. In the pre-extending version, this rises to 500,000. With the OP's 'modest size' obviously being less than 30, my implementation is a non-starter in either form.

    And, for the OP's specific requirements, all the big-O analysis of algorithms in the world isn't going to change that. In the real world, big-O is no substitute for benchmarks of actual implementations using actual data. Which makes this whole subthread, like big-O notation, academic.

    I also completely disagree with your assessment that my implementation is "a variation of a Bloom Filter". The key element of a Bloom Filter is that it is a probabilistic solution, subject to the possibility of false positives. My implementation does not suffer that limitation; it is completely deterministic. If you want to categorise it, then it is a brute-force lookup table. Nothing more.
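    For anyone who wants to see the distinction in code, here is a minimal sketch of the lookup-table idea using a vec bit vector (simplified for illustration; it is not the buk() code from this thread, and it assumes non-duplicate, non-negative integers):

    use strict;
    use warnings;

    # A brute-force lookup table: one bit per possible value, held in a
    # bit vector. Completely deterministic -- a set bit means the value
    # really was seen; there are no false positives.
    my @a = ( 1, 2, 3, 4 );
    my @b = ( 3, 4, 5 );

    my $seen = '';
    vec( $seen, $_, 1 ) = 1 for @a;              # mark every element of @a

    my @intersection = grep {  vec( $seen, $_, 1 ) } @b;
    my @union        = ( @a, grep { !vec( $seen, $_, 1 ) } @b );

    print "intersection: @intersection\n";       # intersection: 3 4
    print "union: @union\n";                     # union: 1 2 3 4 5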


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      I am completely in line with you about the big-O thing. Of course, in real-world applications there are many more factors to be taken into consideration than just theoretical complexity analysis. However my argument is still that these algorithms are O(m+n) and whatever optimization you perform you can't change that.

      And in fact the buk() function is a variation of the Bloom Filter. The only difference is that in your code you use one hash function which is f(x)=x. In the real Bloom Filter k hash functions are used, which makes this algorithm probabilistic.
        However my argument is still that these algorithms are O(m+n) and whatever optimization you perform you can't change that.

        And yet they perform and scale in entirely different ways, which makes the big-O assessment, regardless of its numerical value, as useful as a chocolate teapot.

        And in fact the buk() function is a variation of the Bloom Filter. The only difference is that in your code you use one hash function which is f(x)=x. In the real Bloom Filter k hash functions are used, which makes this algorithm probabilistic.

        I'm sorry, but that is just plain wrong! Actually, I'm going to say that rather more assertively. That is unmitigated and unmitigable, 100% pure bovine droppings. It is the intellectual equivalent of saying: an F1 car is a variation on an airplane; it's just that its wings are on upside down.

        The mathematician's trick of applying an identity function to a variable in order to re-categorise the situation doesn't work here, just as turning an F1 car upside down won't allow you to fly. It might allow you to drive upside down (on the roof of a tunnel, given sufficient forward momentum), but it still wouldn't be flying.

        The defining characteristic of a Bloom Filter is that it is non-deterministic. If it ain't non-deterministic, it ain't a Bloom Filter!

        Bloom Filters were invented in 1970. Lookup tables have existed for hundreds, if not thousands, of years. Seamen were using them at least as early as the 15th or 16th century. It might be correct (though still a stretch) to describe a Bloom Filter as a non-deterministic variation of a lookup table, but the reverse is not, and never will be, true.
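        To make the contrast concrete, here is roughly what a toy Bloom Filter looks like (the two hash functions are made up purely for illustration). Note the "probably present" answers: the possibility of false positives is exactly what a plain lookup table does not have:

        use strict;
        use warnings;

        # A toy Bloom Filter: k=2 (made-up) hash functions mapping values
        # into a small bit vector. Answers are "definitely absent" or only
        # "probably present" -- false positives are possible, which is what
        # makes it probabilistic, unlike a deterministic lookup table.
        my $bits   = 64;
        my @hashes = (
            sub { ( 97 * $_[0] + 13 ) % $bits },
            sub { ( 61 * $_[0] +  7 ) % $bits },
        );

        my $filter = '';
        for my $value ( 1, 2, 3, 4 ) {
            vec( $filter, $_->($value), 1 ) = 1 for @hashes;
        }

        for my $probe ( 3, 5, 1000 ) {
            my $maybe = 1;
            $maybe &&= vec( $filter, $_->($probe), 1 ) for @hashes;
            print "$probe: ", $maybe ? "probably present\n" : "definitely absent\n";
        }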


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.