I'm aware that the theorists will categorise them as having the same order of complexity, but when additional conditional checks are required, the complexity has increased.
And at some point you have to decide whether you need the lowest value greater than or equal to the search term, or the highest value less than or equal to it. That choice adds to the (actual, real-world) complexity of the code.
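To make the distinction concrete, here is a minimal sketch of the two bound choices (my own illustration, not code from the node; the sub names lower_bound and upper_floor are invented for this example). Both assume the array is sorted ascending and differ only in the comparison used and the index returned:

use strict;
use warnings;

## First index whose value is >= $target, or scalar @$aref if none.
sub lower_bound {
    my( $aref, $target ) = @_;
    my( $lo, $hi ) = ( 0, scalar @$aref );
    while( $lo < $hi ) {
        my $mid = int( ( $lo + $hi ) / 2 );
        if( $aref->[ $mid ] < $target ) { $lo = $mid + 1 }
        else                            { $hi = $mid     }
    }
    return $lo;
}

## Last index whose value is <= $target, or -1 if none.
sub upper_floor {
    my( $aref, $target ) = @_;
    my( $lo, $hi ) = ( 0, scalar @$aref );
    while( $lo < $hi ) {
        my $mid = int( ( $lo + $hi ) / 2 );
        if( $aref->[ $mid ] <= $target ) { $lo = $mid + 1 }
        else                             { $hi = $mid     }
    }
    return $lo - 1;
}

my @sorted = ( 1, 3, 3, 7, 9 );
print lower_bound( \@sorted, 4 ), "\n";   ## 3 (value 7, lowest >= 4)
print upper_floor( \@sorted, 4 ), "\n";   ## 2 (value 3, highest <= 4)

The loops are near-identical; the extra comparisons and the off-by-one handling are exactly the kind of "same big-O, more real work" I'm talking about.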
I know you know this, as your many Sort::* packages attest: in Perl, it is the number of source-level operations performed that matters most for efficiency:
use Benchmark qw( cmpthese );
use List::Util qw( sum );

our @a = 1 .. 1e6;    ## package variable, so the eval'd snippets below can see it

cmpthese -1, {
    a => q[ my $total = sum @a; ],
    b => q[ my $total = 0; $total += $_ for @a; ],
};
     Rate    b    a
b  10.8/s   -- -73%
a  40.9/s 277%   --
Identical algorithms, but a significant difference in efficiency, because the per-element iteration and addition in the sum version happen in C inside List::Util rather than as Perl-level opcodes.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.