One way of demonstrating the cause of the apparent inefficiency of $#... compared to scalar @... is to reverse the premise. What if what you wanted was the last index and not the size? Then you could use the same three methods to arrive at your solution.
    use strict;
    use Benchmark qw[cmpthese];

    my @array = (1..100);

    cmpthese(1000000, {
        'scalar'  => \&last_index_scalar,
        'index'   => \&last_index,
        'context' => \&last_index_context,
    });

    sub last_index_context { my $val1 = @array - 1;         }
    sub last_index         { my $val2 = $#array;            }
    sub last_index_scalar  { my $val3 = scalar(@array) - 1; }

    __DATA__
    P:\test>test
                 Rate  scalar context   index
    scalar   207641/s      --     -4%    -17%
    context  217014/s      5%      --    -14%
    index    251509/s     21%     16%      --
Now it becomes fairly obvious that it isn't that $# is "so slow", but that doing an extra addition (or subtraction) in a very tight loop around an otherwise very fast operation is enough to completely distort the results. It also demonstrates that knowing which is the "right way" has benefits besides clarity.
This single micro-optimisation isn't going to make a huge difference in 98% of programs. However, combine the effect of this one with half a dozen others (passing references to arrays rather than the arrays themselves; using a hash for lookups rather than grepping an array; avoiding excessive backtracking in your regexes; moving as much of the processing as possible into the built-ins rather than using explicit loops; and a whole host of other, individually micro, optimisations), spread them liberally through an application that needs to process large volumes of data very fast in tight loops, and the effect can become significant.
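By way of illustration only (a rough sketch I've just knocked up, not part of either benchmark; the array size, the value searched for, and the sub names are all made up), here are two of those side by side: passing a reference instead of copying the whole array onto the stack, and using a hash for membership tests instead of grepping a list.

    use strict;
    use warnings;
    use Benchmark qw[cmpthese];

    my @data = (1..1_000);

    my %lookup;
    $lookup{$_} = 1 for @data;      # build the lookup hash once, up front

    # Sum the elements via a full copy of the array, and via a reference to it.
    sub sum_copy { my @copy = @_;    my $total = 0; $total += $_ for @copy;  return $total }
    sub sum_ref  { my $aref = shift; my $total = 0; $total += $_ for @$aref; return $total }

    cmpthese(-1, {
        'pass array' => sub { sum_copy( @data ) },
        'pass ref'   => sub { sum_ref( \@data ) },
        'grep'       => sub { my $found = grep { $_ == 999 } @data },
        'hash'       => sub { my $found = exists $lookup{999}      },
    });

The exact numbers will vary from machine to machine, but the reference-passing and hash-lookup variants should come out comfortably ahead, and each saving is just as unremarkable, individually, as the one above.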
I'm talking about the combined effect of the savings made by simply using the correct technique or construct in the right place, rather than resorting to obscure or "tricky" optimisations.
Overall, it can mean the difference between being able to use perl for that application and being forced to move to something icky like C, just for the sake of simply not knowing.
BTW. I'm mightily impressed with your allocation of a 10 million element array, and your patience while it was allocated :). Allocating and initialising approx. 65+ MB of data takes a while on my old machine, but in truth I doubt it made a jot of difference to the outcome of your benchmark compared to using a 100-element, or even a 2-element, array, as I believe (though I haven't verified it) that the size of the array (and by implication the last index) is retrieved from storage rather than scanned for or calculated.
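If you wanted to verify that, a quick sketch along these lines (the array sizes are arbitrary) should report essentially the same rate for the tiny array as for the large one, which is what you'd expect if the size is simply read back from the array's bookkeeping rather than counted:

    use strict;
    use warnings;
    use Benchmark qw[cmpthese];

    my @small = (1..2);
    my @large = (1..1_000_000);    # big enough that counting elements would show up

    cmpthese(1_000_000, {
        'small array' => sub { my $n = scalar @small },
        'large array' => sub { my $n = scalar @large },
    });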
One possible explanation for why $#... is slightly slower than scalar @... (implicit or explicit) is that there probably has to be additional internal calculation to arrive at $#... Probably two, actually: one to adjust the stored size by -1 to allow for the zeroth element, and a second to add in any setting of the deprecated $[ for legacy reasons. The fact that internally perl has to subtract 1 and then add $[ (or at least test it for non-zero, though that would probably be a wasted optimisation!), and then you have to add the 1 back, means at least 3 extra instructions overall by using the 'wrong' method.
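As a small sanity check of that arithmetic (assuming the default, zero-based $[ of a stock perl; this snippet is purely illustrative), the two routes to the size line up exactly as described:

    use strict;
    use warnings;

    my @array = (1..100);

    # The direct route: the size is read straight from the array.
    my $size_direct  = scalar @array;

    # The roundabout route: perl derives the last index from the stored size
    # (subtracting 1, and allowing for $[ on old perls), then we add the 1 back.
    my $size_via_idx = $#array + 1;

    printf "direct: %d, via \$#array: %d\n", $size_direct, $size_via_idx;    # both print 100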
In reply to Re: Timing of Array-Size Determination Methods by BrowserUk
in thread Timing of Array-Size Determination Methods by Itatsumaki