

in reply to Big-O Notation - What is it good for?

I think this is a very nice introduction to the topic. But I'd like to clear up a few minor nits that I frequently see, although in the grand scheme of things they are not too important. If I'm lucky, I'll be able to clear them up while still keeping things down-to-earth (Update: you were probably wise to gloss over these details, since it took me this long to address them)! You've said:
... it [big-O] describes how the algorithm scales (performs) in the worst case scenario as it is run with more input ...

... Big-O belongs to an entire family of notation ... family members include describing the average and best cases ...

There is a common misconception that Big-O means worst-case, Big-Omega means best-case, Big-Theta means average-case, etc. Well, kinda. In many common cases, that is what is meant. But it can be an imprecise mental model. Let me propose a different interpretation.

For now let's ignore the distinction between best-case, worst-case, and average-case running times. So imagine an algorithm that takes exactly the same amount of time on all inputs of the same size (best-case, worst-case, and average-case all the same). Then:

Saying the algorithm is O(f(n)) means that, for large enough inputs, its running time is bounded above by f(n): it takes at most f(n) steps, give or take a constant factor.
Saying the algorithm is Omega(f(n)) means that, for large enough inputs, its running time is bounded below by f(n): it takes at least f(n) steps, give or take a constant factor.

(I'm intentionally omitting Big-Theta here to keep things non-technical)
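
To make that concrete, here's a tiny Perl sketch (my own toy example, not something from the tutorial) of an algorithm whose running time is the same on every input of a given size, so it is both O(n) and Omega(n):

    # sum_list does exactly one addition per element, no matter what the
    # elements are, so best, worst, and average case all coincide. Its
    # running time is O(n) (at most linear) and Omega(n) (at least linear).
    sub sum_list {
        my @items = @_;
        my $total = 0;
        $total += $_ for @items;   # exactly scalar(@items) additions
        return $total;
    }

    print sum_list(1 .. 10), "\n";   # prints 55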

So why is Big-O commonly associated with worst-case running times, and why is that imprecise? It's because when considering the worst possible case, it is natural to give a limit on how bad that worst case can be, not how good it can be. That is, we want to give an upper bound on its degree of badness. Similarly, we often want to give a lower bound on how good the best-case is (i.e., even on good inputs, there's still a limit on how fast the algorithm can go; what is that limit?), so Big-Omega gets associated with best-case.
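
To see both associations in one place, here's a hypothetical Perl example of mine where the best and worst cases genuinely differ:

    # linear_search's running time depends on *which* input of size n it
    # gets. The worst case (target absent, or in the last slot) takes n
    # comparisons, so the worst case is O(n): an upper bound on how bad it
    # gets. The best case (target in the first slot) takes 1 comparison,
    # so the best case is Omega(1): a lower bound on how fast it can be.
    sub linear_search {
        my ($target, @items) = @_;
        for my $i (0 .. $#items) {
            return $i if $items[$i] == $target;
        }
        return -1;   # not found: we scanned all n elements
    }

    print linear_search(3, 3, 1, 4, 1, 5), "\n";   # best case: prints 0
    print linear_search(9, 3, 1, 4, 1, 5), "\n";   # worst case: prints -1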

That's why Big-O gets associated with worst-case running times, and Big-Omega with best-case. And it's true that if someone just says "the running time" is O(n^2), then n^2 is indeed "closer" to the worst-case running time than to the best-case running time, in the sense that n^2 is "bigger" than all possible running times, and the worst-case running time is "bigger" than the best-case running time. But O(n^2) doesn't mean that the worst-case running time actually is n^2, just that it is at most n^2.

And it's not wrong or illegal for me to express (for instance) a lower bound on the worst-case running time (i.e., "the worst-case is at least this bad"). In fact, this kind of bound can be quite useful. Big-O and friends are just expressing bounds/limits on a function. I can give both upper bounds and lower bounds for any function, whether that function expresses the worst-case, best-case, or average-case running time of an algorithm.

My second point, which I already hinted at, deals with how close the upper/lower bounds are to the actual running time. Remember, Big-O expresses an upper limit on your running time, but that limit doesn't need to be close to the actual running time! Here's an example. Again, for simplicity, let's just assume that average-case = best-case = worst-case running times.

I have two algorithms. My local CS major tells me that algorithm A's runtime is O(n^3), but algorithm B's runtime is O(n^2). Which is better?

Well, all I know is that algorithm A takes no more than n^3 steps, and algorithm B takes no more than n^2 steps. It may be the case that algorithm A takes far fewer than n^3 steps. Maybe it was a difficult algorithm to analyze, and it actually takes exactly n steps, but our CS major could only prove an upper bound of n^3. The upper bound of O(n^3) is still absolutely correct. And it may be the case that algorithm B takes exactly n^2 steps on all of its inputs. It was an easy algorithm to analyze, and the CS major could count the number of steps exactly. The upper bound here of O(n^2) is still correct.
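
Here's a toy Perl sketch of that situation (hypothetical stand-in algorithms that just count their own steps):

    # algorithm_a really takes exactly n steps, even though the correct but
    # sloppy published bound is O(n^3). algorithm_b takes exactly n^2 steps,
    # so its O(n^2) bound happens to be tight.
    sub algorithm_a {
        my $n = shift;
        my $steps = 0;
        $steps++ for 1 .. $n;        # exactly n steps
        return $steps;
    }

    sub algorithm_b {
        my $n = shift;
        my $steps = 0;
        for my $i (1 .. $n) {
            $steps++ for 1 .. $n;    # n steps, repeated n times: n^2 total
        }
        return $steps;
    }

    printf "A: %d steps, B: %d steps\n",
           algorithm_a(100), algorithm_b(100);   # A: 100, B: 10000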

In this example, algorithm B has the more "attractive" big-O, but algorithm A is always faster. (Update: fixed this sentence according to jpearl's comment)

So while Big-O gives you an upper bound, it does not tell you how good that bound is. It may be a very very rough/sloppy over-estimate. So it's nice to be able to have corresponding lower bounds as well. If the upper and lower bounds for some running time are close, you know that the actual running time must live in the small space between these two bounds. This is why I mentioned that asking for a lower bound on the worst-case running time could be a useful thing. This is why we get things like Big-Omega and Big-Theta, but for that you'll have to look elsewhere. This reply is already way too long ;)

blokhead

Re^2: Big-O Notation - What is it good for?
by jpearl (Scribe) on Mar 13, 2009 at 23:44 UTC
    Hrm, in your last bolded sentence, do you have A and B switched? Very good intro though :-)
      Thank you. I didn't even have that mental model. Having it really clears things up. Great work!
Re^2: Big-O Notation - What is it good for?
by Anonymous Monk on Jul 10, 2008 at 14:34 UTC
    Coming from a background "unbounded" by CS, and more often approaching a mathematical context, the tutorial explanation, and even more so the comment afterwards, are amazingly AWESOME! I'm usually left stuck between programmers who dislike math and mathematicians who just don't get computers, and getting any kind of consensus and mutually beneficial discussion is impossible... A big thank-you to both authors for your time and effort, and if anyone is aware of additional resources for such topics (CS/math and math/CS crossovers, or explanations of theories behind practical applications and, of particular interest to me, practical applications of theories), PLEASE share!