Does the order in which Benchmark.pm tests various subroutines bias the results which Benchmark reports?

This is the inference I am drawing from repeated tests using Benchmark, and I would like to know if other users have experienced the same phenomenon.

The specific situation: I am preparing an update of my CPAN module List::Compare. I have been tweaking its internals in the hope of a speed boost, and I would like to know for certain whether the *cumulative* result of these tweaks is a speed-up of the module *as a whole*.

To test this with Benchmark, I did the following:
1. Renamed the new version of the module 'Mist::Compare'.
2. Wrote subroutines which created List::Compare and Mist::Compare objects, respectively, and called two typical (intersection) methods on each. To give these tests a good workout, I passed each constructor references to three lists of 30000, 27500 and 7500 items, respectively, with enough overlap to guarantee a nonzero intersection. (A sketch of the list setup follows the subroutines below.)

    sub listc {
        my $lcm = List::Compare->new(
            $listrefs[0], $listrefs[1], $listrefs[2] );
        my @int    = $lcm->get_intersection();
        my $intref = $lcm->get_intersection_ref();
    }

    sub mistc {
        my $lcm = Mist::Compare->new(
            $listrefs[0], $listrefs[1], $listrefs[2] );
        my @int    = $lcm->get_intersection();
        my $intref = $lcm->get_intersection_ref();
    }
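
The actual construction of @listrefs is not shown here; the following is only a rough sketch, assuming simple string items, of how three lists matching the sizes and overlap described in step 2 could be built:

    # Assumed setup: three overlapping lists of 30000, 27500 and 7500
    # items; the shared range 2501..7500 guarantees a nonzero
    # intersection of all three.
    my @listrefs = (
        [ map { "item$_" } 1    .. 30000 ],   # 30000 items
        [ map { "item$_" } 2501 .. 30000 ],   # 27500 items
        [ map { "item$_" } 1    .. 7500  ],   # 7500 items
    );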

3. Benchmarked these two subroutines with varying numbers of iterations, with the following results. (For simplicity, I'm only going to show the most critical measurement: the 'usr' time.)

    Benchmark: timing 10 iterations of listc, mistc...
         listc:  91.58 usr
         mistc: 100.13 usr
    Benchmark: timing 50 iterations of listc, mistc...
         listc: 506.71 usr
         mistc: 524.29 usr
    Benchmark: timing 100 iterations of listc, mistc...
         listc: 727.00 usr
         mistc: 750.56 usr
    Benchmark: timing 100 iterations of listc, mistc...
         listc: 731.85 usr
         mistc: 751.10 usr
    Benchmark: timing 100 iterations of listc, mistc...
         listc: 731.89 usr
         mistc: 753.57 usr
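
The driver script itself is not shown above; here is a minimal sketch, assuming a plain Benchmark::timethese call with the iteration counts varied by hand, of the kind of code that produces output like this:

    use Benchmark qw(timethese);

    # Assumed driver: one timethese() call per iteration count.
    for my $count (10, 50, 100) {
        timethese( $count, {
            listc => \&listc,
            mistc => \&mistc,
        } );
    }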

Note that in each case the older -- and presumably slower -- module outperformed the newer, revised module. This ran contrary to my expectations: each modification I tried out in the newer version had itself been benchmarked, and was included only if it clearly proved to be faster.

I started to wonder: What would happen if I simply reversed the order in which Benchmark tested the two subroutines? Since Benchmark runs the tests in string-sorted order of their names, I aliased mistc() to a new name, amistc(), which sorts before 'listc' in ASCII order:

    *amistc = \&mistc;

    Benchmark: timing 10 iterations of amistc, listc...
        amistc:  90.80 usr
         listc: 102.63 usr
    Benchmark: timing 50 iterations of amistc, listc...
        amistc: 508.31 usr
         listc: 405.34 usr
    Benchmark: timing 100 iterations of amistc, listc...
        amistc: 727.48 usr
         listc: 748.60 usr
    Benchmark: timing 100 iterations of amistc, listc...
        amistc: 737.53 usr
         listc: 765.64 usr
    Benchmark: timing 100 iterations of amistc, listc...
        amistc: 734.79 usr
         listc: 754.06 usr
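
The corresponding change to the driver is likewise not shown; an assumed sketch, just to make explicit why the alias reverses the test order (Benchmark picks up the key 'amistc' before 'listc' when it sorts the names):

    # Assumed: same driver as before, but benchmarking the alias under
    # a name that sorts before 'listc', so it is tested first.
    *amistc = \&mistc;

    timethese( 100, {
        amistc => \&amistc,
        listc  => \&listc,
    } );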

Note that, with one exception (the second set of timings above), the first subroutine to be tested ran faster than the second -- even though here the first subroutine, amistc(), is *exactly the same code* as mistc(), which was tested second and ran slower in the first set of runs above.

It almost seems as if Benchmark -- or Perl -- is getting tired when the subroutine it is testing involves a fair amount of computation. But, in any event, on the basis of this admittedly small sample I would seriously doubt whether Benchmark is capable of telling me accurately whether the older or newer version of my module is faster.

I googled the archives at comp.lang.perl.modules on this, but couldn't come up with anything. I then supersearched the perlmonks archives; other peculiarities of Benchmark have been reported, but I couldn't find anything on this problem.

Which leads to these questions:

1. Have other users experienced similar problems?
2. Does anyone have an explanation for why, in 9 out of 10 cases, the subroutine tested second was the slower-running one?
3. Does anyone have a better way of benchmarking subroutines that entail a fair amount of calculation?

Thank you very much.
Jim Keenan

