rolandomatilla:
I've never studied big-O notation, but have heard a bit about it. Since it's intended to show how an algorithm scales, you generally only care about the dominant term (i.e., the term that grows the fastest as n grows). The constant multiplier is ignored.
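For what it's worth, here's a toy illustration of that point. The cost function 3*n**2 + 10*n + 50 is entirely made up for demonstration; the idea is just to show the n**2 term swamping everything else as n grows:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Suppose an algorithm takes 3*n**2 + 10*n + 50 steps (invented numbers).
    # As n grows, the n**2 term accounts for nearly all of the cost, which is
    # why big-O keeps only the dominant term and drops the constant 3.
    for my $n (10, 100, 1_000, 10_000) {
        my $total    = 3 * $n**2 + 10 * $n + 50;
        my $dominant = 3 * $n**2;
        printf "n=%6d  total=%12d  n^2 share=%.4f\n",
            $n, $total, $dominant / $total;
    }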
So the first one turns into O(log(n)), and the second one turns into O(logx(n)), where x is the smaller of a or b. (Remember log10(1024) is roughly 3 while log2(1024) is 10, so log2(n) grows faster.)
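A quick sanity check of those log figures (Perl's log() is natural log, so log base b is log($n)/log($b)):

    use strict;
    use warnings;

    my $n = 1024;
    printf "log2(%d)  = %.2f\n", $n, log($n) / log(2);    # 10.00
    printf "log10(%d) = %.2f\n", $n, log($n) / log(10);   # ~3.01
    # Note the two differ only by the constant factor log(10)/log(2), about
    # 3.32 -- which is why the base usually gets dropped in big-O as well.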
Anyway, that's my uneducated take on it. I'm sure others will pipe up and let us know what I got wrong, and add refinement where needed.
Update: I forgot to mention that the algorithm that scales better can be a worse performer until your data set is big enough. Just because one algorithm is "better" by big-O notation doesn't mean it's the best for your application. (If the "better" one only wins when n > 10,000,000, then you may find that the "inefficient" one is pretty good for you when your n is less than that, and it may be pretty hard to beat for small n.)
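To make the crossover concrete, here's a sketch with two invented cost models. The constant 200 and the n*log(n) vs. n**2 shapes are assumptions, not measurements of any real algorithm; the point is just that the "better" curve loses until n gets big enough:

    use strict;
    use warnings;

    # Invented cost models: a "scalable" algorithm with a big constant,
    # versus a "naive" n**2 algorithm with no overhead.
    sub scalable { my $n = shift; 200 * $n * log($n) / log(2) }
    sub naive    { my $n = shift; $n**2 }

    for my $n (100, 1_000, 10_000, 100_000) {
        printf "n=%7d  scalable=%15.0f  naive=%15.0f  winner=%s\n",
            $n, scalable($n), naive($n),
            scalable($n) < naive($n) ? 'scalable' : 'naive';
    }

With these particular constants the naive version wins until somewhere between n = 1,000 and n = 10,000, then the scalable one pulls away for good.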
Update 2: Gah! I didn't read the first one correctly and didn't see the n^2, as jettero points out. Obviously I botched the first one.
...roboticus
When your only tool is a hammer, all problems look like your thumb.