
All,
If you have ever felt that people debating the Big-O notation of some algorithm sound like they are speaking a foreign language, then this tutorial is for you. You may even have decided to educate yourself by checking out the Wikipedia entry, only to be convinced it really was a foreign language. You are not alone.

You may have already read An informal introduction to O(N) notation by dws, which is excellent. This tutorial repeats much of the same information in a more elementary manner, and also goes into detail about how useful (or not) the notation is. I likely won't be completely accurate about a number of things in this tutorial. My hope is that by the end, your understanding of the topic will be sufficient for you to follow the corrections others make (as I am sure they will).

What Is The Big-O

This tutorial covers the Big-O as it relates to computer science. If you were thinking of something else (perhaps Fridays in the Chatterbox), you can stop reading now. Simply put, it describes how the algorithm scales (performs) in the worst case scenario as it is run with more input. Since my simple explanation may not be simple enough, let me give an example. If we have a sub that searches an array item by item looking for a given element, the scenario the Big-O describes is when the target element is last (or not present at all). This particular algorithm is O(N), so the same algorithm working on an array with 25 elements should take approximately 5 times longer than on an array with 5 elements.
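
To make this concrete, here is a minimal sketch of the kind of linear search described above (the sub name and test data are mine, invented for illustration):

use strict;
use warnings;

# Linear search: examine each element in turn until we find the target.
# The worst case (target last or absent) touches all N elements, hence O(N).
sub find_index {
    my ($target, @array) = @_;
    for my $i (0 .. $#array) {
        return $i if $array[$i] eq $target;
    }
    return -1;    # not found - the worst case the Big-O describes
}

my @colors = qw(red green blue yellow purple);
print find_index('purple', @colors), "\n";    # 4 - scanned every element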

It is easy to lose sight of the fact that there is more to consider about an algorithm other than how fast it runs. The Big-O can also be used to describe other behavior such as memory consumption. We often optimize by trading memory for time. You may need to choose a slower algorithm because it also consumes less of a resource that you need to be frugal with.
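
As a rough sketch of that trade-off (this example is mine, not from the original): building a hash index costs O(N) extra memory up front, but turns each subsequent O(N) scan into an O(1) average-case lookup.

use strict;
use warnings;

my @array = qw(red green blue yellow purple);

# Spend O(N) memory once to build an index...
my %index;
$index{ $array[$_] } = $_ for 0 .. $#array;

# ...so each later lookup is O(1) on average instead of an O(N) scan.
print exists $index{blue} ? "found at $index{blue}\n" : "not found\n";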

What The Big-O Is Not

Constants: The Big-O is not concerned with factors that do not change as the input increases. Let me give an example that may be surprising. Let's say we have an algorithm that needs to compare every element in an array to every other element in the array. A simple implementation may look like:
for my $i (0 .. $#array) {
    for my $j (0 .. $#array) {
        next if $j == $i;
        # Compare $i, $j
    }
}
This is O(N^2). After a little bit of testing we decide that this is far too slow, so we make a little optimization.
for my $i (0 .. $#array - 1) {
    for my $j ($i + 1 .. $#array) {
        # Compare $i, $j
    }
}
We have just cut our run time in half - YAY! Guess what: the Big-O has stayed the same, O(N^2). This is because N^2 / 2 has only one part that varies with the input; the division by 2 stays the same (constant) regardless of the input size, so the notation drops it. There are valid mathematical reasons for doing this, but it can be frustrating to see two algorithms with the exact same Big-O where one runs twice as fast as the other.
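
If you want to see that constant factor for yourself, a rough sketch using the core Benchmark module (the array size and labels are mine) should show the second version running roughly twice as fast while both remain O(N^2):

use strict;
use warnings;
use Benchmark qw(cmpthese);

my @array = 1 .. 500;

cmpthese(-2, {
    full => sub {
        for my $i (0 .. $#array) {
            for my $j (0 .. $#array) {
                next if $j == $i;
                my $cmp = $array[$i] <=> $array[$j];    # stand-in comparison
            }
        }
    },
    half => sub {
        for my $i (0 .. $#array - 1) {
            for my $j ($i + 1 .. $#array) {
                my $cmp = $array[$i] <=> $array[$j];    # stand-in comparison
            }
        }
    },
});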

Implementation Details: The Big-O is an uncaring cold-hearted jerk. It does not care if you can't afford to buy the extra RAM needed for your problem and have to resort to tying your hash to disk. You are on your own. It also doesn't care that the data structure you would need to implement to achieve O(Log Log N) is so complex you will never be able to maintain it. In a nutshell, the Big-O lives in the land of theory and doesn't care very much about the real world.

What The Big-O Is Good For

The good news is that the Big-O belongs to an entire family of notation. This tutorial will not cover them, but the family includes notation for describing the average and best cases. The Big-O also serves as a good indicator of which algorithm to use once you take your individual circumstances into consideration. Let me give a contrived example:

Let's consider using caching as an optimization. In theory, the Big-O is going to ignore it, saying your input is all different and you will never benefit from it. In reality, you test it and discover that you have a 60% hit rate. You do a little more experimenting and discover that the input size required for a more complex algorithm to be faster is larger than your real maximum input size. All this despite the more complex algorithm having a more favorable Big-O.
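
To ground the caching idea, here is a minimal memoization sketch (expensive_lookup and its fake workload are hypothetical, invented for illustration):

use strict;
use warnings;

my %cache;

# Trade memory (the cache) for time: repeated inputs skip the real work.
sub expensive_lookup {
    my ($key) = @_;
    return $cache{$key} if exists $cache{$key};    # cache hit: O(1)
    return $cache{$key} = do_real_work($key);      # cache miss: pay full cost
}

sub do_real_work {
    my ($key) = @_;
    select undef, undef, undef, 0.1;    # simulate a slow computation
    return length $key;
}

print expensive_lookup('foo'), "\n";    # slow - computed
print expensive_lookup('foo'), "\n";    # fast - served from the cache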

In a nutshell, the Big-O of a given algorithm, combined with specific knowledge of your problem, is a great way to choose the best algorithm for your situation.

What Do Those Symbols Mean?

So by this point you should realize that Big-O (theory) without context (real world) is not very useful. You are now armed with the knowledge necessary to start using the Big-O as the mercenary it is. OK, Big-O, what exactly do you mean when you say that algorithm is O(N Log N)? I am going to duck at this point and suggest you read the node by dws or the Wikipedia entry I linked to earlier. You may now be wondering if the Big-O is really inanimate, perhaps even an abstract concept, and not at all the living thing I have made it out to be. If so, how can you determine the Big-O of a given algorithm? Analysis of algorithms is not for the faint of heart, so I must once again duck.
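
One quick way to get a feel for the symbols, short of reading those links, is to print the common growth functions side by side (a toy sketch, not an analysis tool):

use strict;
use warnings;

# Rough feel for how common Big-O classes grow with N.
printf "%10s %12s %14s %16s\n", 'N', 'Log N', 'N Log N', 'N^2';
for my $n (10, 100, 1_000, 10_000) {
    my $log = log($n) / log(2);    # Big-O ignores the base; log2 is customary
    printf "%10d %12.1f %14.1f %16d\n", $n, $log, $n * $log, $n**2;
}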

I have not really added anything beyond the other links I referenced. I do hope, however, that I have put it in plain enough English to be understood by even the most extreme novice. I welcome those more knowledgeable than myself to add corrections as well as provide additional content. I would only ask that you do so in the same spirit as this tutorial (understandable by non-CS majors).

Update: Removed an incorrect analogy regarding slopes thanks to blokhead.
Also see Sorting a list of IP addresses (aka Why I hate Big O) by jeffa

Cheers - L~R