in reply to Re^4: Fast, Efficient Union and Intersection on arrays
in thread Fast, Efficient Union and Intersection on arrays

I agree with you that trying to use big-O to compare the time-costs of space complexity is pretty pointless. I was simply following moritz's lead, with the purpose of making that very point.

I will strongly disagree with your statement about big-O notation being done only in academic papers.

That's not what I said. I said "big-O is rarely done properly outside of academic papers", which is quite different. Feel free to disagree with what I actually said as well, but you might want to read on first.

What I said is not a million miles away from your own later statement: "Complexity theory is a real treasure when we can apply it properly."

The problem is that people will insist on trying to use the analysis of algorithm complexity to compare the time-costs of actual implementations. It simply doesn't work for a variety of reasons.

Big-O notation is useful for comparing specific algorithms for achieving specific, and usually very limited, tasks. The problem is that real-world tasks tend to be more complex, in that they need the application of several specific algorithms, serially or in parallel, to achieve the final goal. And trying to combine the big-O notations of all the sub-algorithms is often far more complex than writing the program and benchmarking it.

Finally, big-O ignores the reality that, in programmer terms, writing two nested loops is no harder than writing two independent loops. And that for small numbers, the difference in run time can be negligible.
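To make that concrete, here is a sketch (in Python, for brevity; the thread's actual code is Perl) showing that the O(n*m) nested-loop intersection is no more effort to write than the O(n+m) lookup-based version, and that for small arrays both produce identical results in negligible time:

```python
a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8]

# Two nested loops: O(n*m)
quadratic = [x for x in a if any(x == y for y in b)]

# Two independent loops: O(n+m) -- build a lookup, then scan
seen = set(b)
linear = [x for x in a if x in seen]

assert quadratic == linear  # identical results; only the scaling differs
```

For arrays this size the quadratic version is, for all practical purposes, just as fast; the asymptotic difference only starts to bite as the arrays grow.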

For example, the OP benchmarked my implementation (not algorithm; moritz had mentioned the possibility of it before I posted) of the vec solution against Roy Johnston's implementation, and his own, on his real-world data sets consisting of "modest sized arrays" of "non-duplicate, positive integers", and found Roy Johnston's the best for his purposes. And that, as they say, is that.

Out of personal interest, I added Roy's implementation to my own benchmark and found that it beat mine (in its original, non-pre-extending form) when the number of elements in the arrays was less than 30. In the pre-extending version, this rises to 500,000. With the OP's 'modest size' obviously being less than 30, my implementation is a non-starter in either form.

And, for the OP's specific requirements, all the big-O analysis of algorithms in the world isn't going to change that. In the real world, big-O is no substitute for benchmarks of actual implementations using actual data. Which makes this whole subthread, like big-O notation, academic.

I also completely disagree with your assessment that my implementation is "a variation of a Bloom Filter". The key element of a Bloom filter is that it is a probabilistic solution, subject to the possibility of false positives. My implementation does not suffer this limitation; it is completely deterministic. If you want to categorise it, then it is a brute-force lookup table. Nothing more.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."

Re^6: Fast, Efficient Union and Intersection on arrays
by ptoulis (Scribe) on Nov 21, 2008 at 11:53 UTC
    I am completely in line with you about the big-O thing. Of course, in real-world applications there are many more factors to be taken into consideration than just theoretical complexity analysis. However my argument is still that these algorithms are O(m+n) and whatever optimization you perform you can't change that.

    And in fact the buk() function is a variation of the Bloom Filter. The only difference is that in your code you use one hash function which is f(x)=x. In the real Bloom Filter k hash functions are used, which makes this algorithm probabilistic.
      However my argument is still that these algorithms are O(m+n) and whatever optimization you perform you can't change that.

      And yet they perform and scale in entirely different ways, which makes the big-O assessment, regardless of its numerical value, as useful as a chocolate teapot.

      And in fact the buk() function is a variation of the Bloom Filter. The only difference is that in your code you use one hash function which is f(x)=x. In the real Bloom Filter k hash functions are used, which makes this algorithm probabilistic.

      I'm sorry, but that is just plain wrong! Actually, I'm going to say that rather more assertively: that is unmitigated and unmitigable, 100% pure bovine droppings. It is the intellectual equivalent of saying: an F1 car is a variation on an airplane; it's just that its wings are on upside down.

      The mathematician's trick of applying an identity function to a variable in order to re-categorise the situation doesn't work here, just as turning an F1 car upside down won't allow you to fly. It might allow you to drive upside down (on the roof of a tunnel, given sufficient forward momentum), but it still wouldn't be flying.

      The defining characteristic of a Bloom Filter is that it is non-deterministic. If it ain't non-deterministic, it ain't a Bloom Filter!

      Bloom Filters were invented in 1970. Lookup tables have existed for hundreds, if not thousands, of years. Seamen were using them at least as early as the 15th or 16th century. It might be correct (though still a stretch) to describe a Bloom Filter as a non-deterministic variation of a lookup table, but the reverse is not, and never will be, true.
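The distinction is easy to demonstrate. Below is an illustrative Python sketch (class name, sizes, and the derived-hash scheme are all hypothetical, chosen only to show the contrast): a Bloom filter maps each key through k hash functions into a shared bit array, so distinct keys can collide and membership queries can return false positives; a single bit table indexed directly by the value cannot, because bit x means exactly "x was seen".

```python
class Bloom:
    """Minimal Bloom filter: m bits, k derived hash functions."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, 0
    def _hashes(self, x):
        # k cheap derived hashes -- an assumption for this sketch, not a recipe
        return [hash((i, x)) % self.m for i in range(self.k)]
    def add(self, x):
        for h in self._hashes(x):
            self.bits |= 1 << h
    def maybe_contains(self, x):
        # 'maybe': all k bits set can happen by collision -> false positives
        return all(self.bits >> h & 1 for h in self._hashes(x))

# The brute-force lookup table: bit x means 'x was seen'. Exact, deterministic.
table = 0
for x in (2, 3, 5, 7):
    table |= 1 << x
assert all(table >> x & 1 for x in (2, 3, 5, 7))
assert not table >> 4 & 1   # a false positive is structurally impossible
```

In the lookup table there is no hash function to collide, not even an identity one doing any work: the value is the address. That is why relabelling it "a Bloom filter with f(x)=x" doesn't change what it is.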

