Others have mentioned that the sort is the dominant component of your benchmark, which is true. Now back to the discussion of passing references versus passing the entire flattened list. Here's what's going on behind the scenes:
When you pass a list, each element of that list is sent. The larger the list, the more elements must be passed in. People who put a pencil to it and figure out how the amount of work being done grows as the dataset grows will tell you that passing a list as a parameter is O(n) (big-oh(n)), meaning that as the dataset grows, there is a 1:1 correlation in the amount of computational work that must be done to move the data around.
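A minimal sketch of the list-passing case (the sub name here is made up for illustration). One wrinkle worth noting: Perl actually aliases the arguments into @_ rather than copying their values, but it still has to set up one entry per element, and the common `my @copy = @_;` idiom then copies all n values on top of that:

```perl
use strict;
use warnings;

# Hypothetical sub that takes the whole flattened list.
sub sum_list {
    # The array flattened into @_: one entry per element,
    # so just setting up this call grows with the list size.
    my @copy = @_;    # and this line copies all n values
    my $sum = 0;
    $sum += $_ for @copy;
    return $sum;
}

my @data = (1 .. 1000);
print sum_list(@data), "\n";    # 500500
```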
When you pass a reference to a list, you are passing but one item: the reference. As the dataset to which the reference refers grows, there is zero growth in the amount of work being done to pass the reference around. Those pencil-pushing bean-counters will call that O(1) (big-oh(1)). What that says, in terms that the rest of us understand, is that as the dataset grows, there is no growth in the amount of work being done to pass a reference to it.
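The reference version, under the same assumptions (sub name invented for illustration); no matter how large the array gets, only one scalar crosses the sub boundary:

```perl
use strict;
use warnings;

# Hypothetical sub that takes an array reference instead.
sub sum_ref {
    # @_ holds exactly one scalar here, however big @$aref is.
    my ($aref) = @_;
    my $sum = 0;
    $sum += $_ for @$aref;    # work on the caller's array in place
    return $sum;
}

my @data = (1 .. 1000);
print sum_ref(\@data), "\n";    # 500500
```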
While these are theoretical concepts, big-oh notation is useful. Unless there is a serious flaw in an O(1) algorithm's implementation, its computational time shows essentially no growth as the dataset grows, whereas an O(n) algorithm will always show roughly 1:1 growth in computational time as the dataset grows. That's pretty much all we have to know; proving it through benchmarks will only echo what is already pretty much indisputable. The benchmarks will just serve to quantify what 1:1 turns out to mean in actual time.
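If you do want to put numbers to it, the core Benchmark module will do the quantifying; something like this (sub names invented, and the exact rates will vary by machine):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @data = (1 .. 100_000);

sub by_list { my $n = scalar @_; return $n }     # O(n) call setup
sub by_ref  { my ($r) = @_; return scalar @$r }  # O(1) call setup

# Run each for about 2 CPU seconds and print a rate comparison;
# expect 'ref' to dominate, and the gap to widen as @data grows.
cmpthese( -2, {
    list => sub { by_list(@data) },
    ref  => sub { by_ref(\@data) },
});
```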
Dave
In reply to Re: Show that using references is faster by davido
in thread Show that using references is faster by Keystroke