"... when the only change I make is to switch the names of a_1 and a_2 so that they run in the opposite order."
One possibility: if the benchmarked subs/tests cause a fair amount of memory to be allocated, then when the first sub/test runs, it pays the penalty not only of perl allocating that memory from the heap, but also of perl requesting that memory from the OS. When the second sub/test runs, the memory used by the first has been returned to the heap, but not to the OS, so the second runs more quickly because no further requests to the OS for memory are required.
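For concreteness, a minimal sketch of that situation (only the names a_1/a_2 come from the thread; the allocation-heavy bodies are assumed for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw( cmpthese );

    # Two deliberately identical, allocation-heavy tests. Whichever runs
    # first pays the one-off cost of growing perl's heap via the OS, so
    # it can report as the slower of the two despite identical code.
    cmpthese( -3, {
        a_1 => sub { my @a = map { [ 1 .. 100 ] } 1 .. 1e4; },
        a_2 => sub { my @a = map { [ 1 .. 100 ] } 1 .. 1e4; },
    } );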
Mitigation: add another subroutine, named so that it sorts lexically before the others, that simply allocates a large(r) amount of memory in small chunks. E.g.:
aaaaaaaaaa => q[ my @a; $a[ $_ ] = [ 1 .. 10 ] for 1 .. 1e6; ],
If you choose the constants correctly, this forces the heap to be expanded up front, so that neither of your real tests requires perl to request more memory from the OS, and the benchmark results are therefore more accurate.
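Putting it together, a sketch of how that warm-up entry might sit alongside the real tests (the a_1/a_2 bodies here are placeholders, not the OP's code):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw( cmpthese );

    # Benchmark's timethese/cmpthese run entries in sorted-name order,
    # so the 'aaaaaaaaaa' entry runs first and pre-expands the heap.
    # Its own timing line in the output can simply be ignored.
    cmpthese( -1, {
        aaaaaaaaaa => q[ my @a; $a[ $_ ] = [ 1 .. 10 ] for 1 .. 1e6; ],
        a_1        => q[ my @x = map { [ 1 .. 50 ] } 1 .. 1e4; ],
        a_2        => q[ my @y = map { [ 1 .. 50 ] } 1 .. 1e4; ],
    } );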
Note: that is just one possible cause; there are several others. If you post concrete examples of the code being tested, you are likely to get more relevant suggestions and mitigations.