in reply to Benchmark.pm: Does subroutine testing order bias results?

I think you're right that, under some circumstances, Benchmark seems to consistently favor the first test run. If the code under test does a lot of allocation and deallocation, as I suspect (but don't know) List::Compare does, then the first test seems to run substantially quicker. The subsequent tests show fairly even performance.

Here, the code tested is identical in all cases, but the first test run (Atest) consistently comes out 23%-46% faster than the other (identical?) tests.

#! perl -slw
use strict;
use Benchmark qw[ cmpthese ];

sub test {
    my @strings = map{ ' ' x 1000 } 1 .. 50_000;
}

cmpthese( 5, {
    Atest => \&test,
    Btest => \&test,
    Ctest => \&test,
    Dtest => \&test,
});

__END__
       s/iter Ctest Btest Dtest Atest
Ctest    9.29    --   -0%   -0%  -31%
Btest    9.29    0%    --   -0%  -31%
Dtest    9.28    0%    0%    --  -31%
Atest    6.36   46%   46%   46%    --

P:\test>373536-2
       s/iter Btest Dtest Ctest Atest
Btest    9.31    --   -0%   -0%  -19%
Dtest    9.29    0%    --    0%  -19%
Ctest    9.29    0%    0%    --  -19%
Atest    7.54   23%   23%   23%    --

I have a tentative conclusion for why this might be, but the size of the difference shown by the benchmark seems too big for my thought to explain all of it. So, I'll keep my trap shut for a while and see what others think.


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon

Replies are listed 'Best First'.
Re^2: Benchmark.pm: Does subroutine testing order bias results?
by hossman (Prior) on Jul 15, 2004 at 03:58 UTC

    For what it's worth...

    Benchmark: timing 5 iterations of Atest, Btest, Ctest, Dtest...
         Atest: 4 wallclock secs ( 4.08 usr + 0.23 sys = 4.31 CPU) @ 1.16/s (n=5)
         Btest: 5 wallclock secs ( 4.18 usr + 0.00 sys = 4.18 CPU) @ 1.20/s (n=5)
         Ctest: 5 wallclock secs ( 4.17 usr + 0.00 sys = 4.17 CPU) @ 1.20/s (n=5)
         Dtest: 5 wallclock secs ( 4.21 usr + 0.00 sys = 4.21 CPU) @ 1.19/s (n=5)
            Rate Atest Dtest Btest Ctest
    Atest 1.16/s    --   -2%   -3%   -3%
    Dtest 1.19/s    2%    --   -1%   -1%
    Btest 1.20/s    3%    1%    --   -0%
    Ctest 1.20/s    3%    1%    0%    --
    Update: possibly more important, the problem isn't exacerbated by increased iterations...
    laptop:~> monk.pl
    Benchmark: timing 20 iterations of Atest, Btest, Ctest, Dtest...
         Atest: 17 wallclock secs (16.61 usr + 0.25 sys = 16.86 CPU) @ 1.19/s (n=20)
         Btest: 16 wallclock secs (16.73 usr + 0.00 sys = 16.73 CPU) @ 1.20/s (n=20)
         Ctest: 17 wallclock secs (16.72 usr + 0.00 sys = 16.72 CPU) @ 1.20/s (n=20)
         Dtest: 17 wallclock secs (16.71 usr + 0.02 sys = 16.73 CPU) @ 1.20/s (n=20)
            Rate Atest Btest Dtest Ctest
    Atest 1.19/s    --   -1%   -1%   -1%
    Btest 1.20/s    1%    --   -0%   -0%
    Dtest 1.20/s    1%    0%    --   -0%
    Ctest 1.20/s    1%    0%    0%    --
    System Info...
      For what it's worth...

      Actually, it's worth quite a lot--to me at least. Thank you greatly.

      Your post prompted me to try my benchmark with 5.6.1 and the difference is stark.

      C:\>perl5.6.1 p:\test\373536-2.pl
      [SNIP]
              Rate Btest Atest Ctest Dtest
      Btest 9.14/s    --   -0%   -3%   -3%
      Atest 9.14/s    0%    --   -3%   -3%
      Ctest 9.40/s    3%    3%    --   -0%
      Dtest 9.42/s    3%    3%    0%    --

      C:\>perl5.8.4 p:\test\373536-2.pl
             s/iter Btest Dtest Ctest Atest
      Btest    4.23    --   -0%   -0%  -18%
      Dtest    4.22    0%    --   -0%  -18%
      Ctest    4.22    0%    0%    --  -18%
      Atest    3.45   22%   22%   22%    --

      Not only does the first-run bias disappear, but look at the iteration times! With 5.6.1, they are in iterations per second. For 5.8.4 they are in seconds per iteration. If my math is up to it this morning (9.14 iterations/s is roughly 0.11 s/iteration, against 4.22 s/iteration), I make that 5.8.4 is around 40x slower than 5.6.1!...

      That's not the end of the story. The 5.6.1 involved is home-built from the CPAN distribution. The 5.8.4 involved is AS 810.

      I've royally screwed my home-build copy of 5.8.4 trying to get USE_PERL_MALLOC to work, so I haven't been able to test that yet, but I am going to re-unzip the lot and start from scratch.

      All of which is probably of not much interest to you, but it has made me look again at various things. Again, thank you.


Re^2: Benchmark.pm: Does subroutine testing order bias results?
by Anonymous Monk on Jul 18, 2004 at 13:55 UTC
    I ran a slightly generalized version of BrowserUk's test() of Benchmark.pm's cmpthese function on 4 different combinations of Perl and OS. From the data below I note:

    1. Across Perls and OSes, increasing the number of iterations in cmpthese() smoothed out the differences in results from the 4 tests in each call. Increasing the number of elements mapped to @strings by test() -- on the 2 OSes where I was able to test that -- didn't have a consistent effect in this regard.

    2. On 3 of the 4 Perl/OS combinations on which the test was run, the first run of the test within a given iteration ran more slowly -- albeit slightly -- than subsequent runs. Only on the Perl 5.8.0/Win2K combination was the first run faster than subsequent ones. This is the opposite of what I reported in my OP to this thread, where I found that, while comparing two different versions of a module, the first subroutine tested via timethese() ran faster than the second.

    It may be that this outcome is due to the type of work which the subroutine being benchmarked is called upon to do. In BrowserUk's test(), only 1 array is created with each iteration. In my OP, I was creating the heaviest version of a List::Compare object, which internally allocates more than a dozen arrays and hashes. But since this object goes out of scope and is presumably destroyed with each call to timethese(), it's not clear why this necessarily means that the second subroutine should run more slowly than the first.

    The plot thickens.

    sub test {
        my @strings = map{ ' ' x 1000 } 1 .. $records;
    }

    cmpthese( $iterations, {
        Atest => \&test,
        Btest => \&test,
        Ctest => \&test,
        Dtest => \&test,
    } );

    __END__
    # 1. BrowserUk's &test on Perl 5.8.0 on Win2K
    # Done 6 ways:

    # a. Testing 5 iterations of 25000 elements ...
            Rate Dtest Btest Ctest Atest
    Dtest 1.04/s    --   -0%   -0%  -13%
    Btest 1.04/s    0%    --   -0%  -13%
    Ctest 1.05/s    0%    0%    --  -13%
    Atest 1.20/s   15%   15%   15%    --

    # b. Testing 5 iterations of 50000 elements ...
           s/iter Dtest Btest Ctest Atest
    Dtest    3.66    --   -0%   -0%  -20%
    Btest    3.66    0%    --   -0%  -20%
    Ctest    3.66    0%    0%    --  -20%
    Atest    2.93   25%   25%   25%    --

    # c. Testing 50 iterations of 25000 elements ...
            Rate Ctest Dtest Btest Atest
    Ctest 1.03/s    --   -0%   -0%   -2%
    Dtest 1.03/s    0%    --   -0%   -2%
    Btest 1.04/s    0%    0%    --   -2%
    Atest 1.06/s    2%    2%    2%    --

    # d. Testing 50 iterations of 50000 elements ...
           s/iter Dtest Ctest Btest Atest
    Dtest    3.19    --   -0%   -0%   -2%
    Ctest    3.18    0%    --   -0%   -2%
    Btest    3.18    0%    0%    --   -2%
    Atest    3.12    2%    2%    2%    --

    # e. Testing 100 iterations of 25000 elements ...
            Rate Btest Atest Ctest Dtest
    Btest 1.03/s    --   -0%   -1%   -1%
    Atest 1.04/s    0%    --   -0%   -1%
    Ctest 1.04/s    1%    0%    --   -0%
    Dtest 1.04/s    1%    1%    0%    --

    # f. Testing 100 iterations of 50000 elements ...
           s/iter Dtest Ctest Btest Atest
    Dtest    5.71    --   -0%   -0%   -1%
    Ctest    5.71    0%    --   -0%   -1%
    Btest    5.69    0%    0%    --   -1%
    Atest    5.65    1%    1%    1%    --

    # 2. BrowserUk's &test on Perl 5.8.4 on Darwin
    # Done 6 ways:

    # a. Testing 5 iterations of 25000 elements ...
            Rate Atest Dtest Btest Ctest
    Atest 2.49/s    --   -4%   -5%   -6%
    Dtest 2.59/s    4%    --   -1%   -2%
    Btest 2.62/s    5%    1%    --   -1%
    Ctest 2.65/s    6%    2%    1%    --

    # b. Testing 5 iterations of 50000 elements ...
            Rate Atest Btest Ctest Dtest
    Atest 1.22/s    --   -6%   -6%   -7%
    Btest 1.29/s    6%    --   -0%   -1%
    Ctest 1.29/s    6%   -0%    --   -1%
    Dtest 1.30/s    7%    1%    1%    --

    # c. Testing 50 iterations of 25000 elements ...
            Rate Atest Dtest Btest Ctest
    Atest 2.59/s    --   -1%   -1%   -1%
    Dtest 2.60/s    1%    --   -0%   -0%
    Btest 2.60/s    1%    0%    --   -0%
    Ctest 2.61/s    1%    0%    0%    --

    # d. Testing 50 iterations of 50000 elements ...
            Rate Atest Btest Ctest Dtest
    Atest 1.28/s    --   -0%   -1%   -1%
    Btest 1.29/s    0%    --   -0%   -0%
    Ctest 1.29/s    1%    0%    --   -0%
    Dtest 1.29/s    1%    0%    0%    --

    # e. Testing 100 iterations of 25000 elements ...
            Rate Ctest Dtest Btest Atest
    Ctest 2.60/s    --   -0%   -0%   -2%
    Dtest 2.60/s    0%    --   -0%   -2%
    Btest 2.61/s    0%    0%    --   -2%
    Atest 2.65/s    2%    2%    2%    --

    # f. Testing 100 iterations of 50000 elements ...
            Rate Atest Btest Dtest Ctest
    Atest 1.29/s    --   -0%   -0%   -0%
    Btest 1.29/s    0%    --   -0%   -0%
    Dtest 1.29/s    0%    0%    --   -0%
    Ctest 1.29/s    0%    0%    0%    --

    # 3. BrowserUk's &test on Perl 5.6.0 on RedHat Linux 7.2
    # Done 3 ways; was unable to do 50000 elements test due to
    # excessive swapping to disk

    # a. Testing 5 iterations of 25000 elements ...
            Rate Atest Btest Ctest Dtest
    Atest 1.80/s    --   -4%   -4%   -4%
    Btest 1.87/s    4%    --   -0%   -0%
    Ctest 1.87/s    4%    0%    --    0%
    Dtest 1.87/s    4%    0%    0%    --

    # b. Testing 50 iterations of 25000 elements ...
            Rate Atest Btest Ctest Dtest
    Atest 1.86/s    --   -0%   -1%   -1%
    Btest 1.87/s    0%    --   -0%   -0%
    Ctest 1.87/s    1%    0%    --    0%
    Dtest 1.87/s    1%    0%    0%    --

    # c. Testing 100 iterations of 25000 elements ...
            Rate Atest Dtest Ctest Btest
    Atest 1.86/s    --   -0%   -0%   -0%
    Dtest 1.87/s    0%    --   -0%   -0%
    Ctest 1.87/s    0%    0%    --   -0%
    Btest 1.87/s    0%    0%    0%    --

    # 4. BrowserUk's &test on Perl 5.6.1 on Windows98SE
    # Done 3 ways; was unable to do 50000 elements test due to
    # excessive swapping to disk

    # a. Testing 5 iterations of 25000 elements ...
            Rate Atest Dtest Btest Ctest
    Atest 2.02/s    --   -9%   -9%  -11%
    Dtest 2.22/s   10%    --   -0%   -2%
    Btest 2.22/s   10%    0%    --   -2%
    Ctest 2.27/s   12%    2%    2%    --

    # c. Testing 50 iterations of 25000 elements ...
            Rate Atest Dtest Ctest Btest
    Atest 2.20/s    --   -1%   -1%   -1%
    Dtest 2.22/s    1%    --   -0%   -0%
    Ctest 2.22/s    1%    0%    --    0%
    Btest 2.22/s    1%    0%    0%    --

    # e. Testing 100 iterations of 25000 elements ...
            Rate Atest Dtest Ctest Btest
    Atest 2.19/s    --   -1%   -1%   -2%
    Dtest 2.21/s    1%    --   -0%   -0%
    Ctest 2.21/s    1%    0%    --   -0%
    Btest 2.22/s    2%    0%    0%    --
    System Info

Re^2: Benchmark.pm: Does subroutine testing order bias results?
by jkeenan1 (Deacon) on Jul 18, 2004 at 14:10 UTC
    The previous posting (beginning, "I ran a slightly more generalized ...") was mine. I forgot to log in.

      I'll reply here so you get the notification.

      The problem is not due to Benchmark.

      I speculated that the first-run bias at low numbers of benchmark iterations could arise because, on the very first iteration, when the storage required by the benchmark is first allocated, it is 'virgin' memory.

      This is a bit like having an empty disk drive: when you write the first few files to it, each one gets contiguous space that directly follows the last. No free-space chains need to be traversed. There is always enough space at the head of the free-space chain to allocate the next file to be written, because the head of the chain is the rest of the disk.

      On the second and subsequent runs, the space freed by the first run is now a chain of blocks of various sizes that may need to be coalesced to fulfill any given request. The memory becomes fragmented, much like disk drives do.
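
      As a minimal illustration of that idea (my sketch, not code from the thread): timing the identical allocation-heavy sub directly with Time::HiRes, outside Benchmark.pm entirely, should show the same pattern on an affected build, with the first call faster than the later ones.

```perl
#!/usr/bin/perl
# Times four consecutive calls of the same allocation-heavy sub.
# On a build where the first-run bias exists (e.g. a 5.8.x perl using
# the CRT malloc, as discussed above), run 1 may come out noticeably
# faster than runs 2-4; on other builds the times should be roughly even.
use strict;
use warnings;
use Time::HiRes qw[ time ];

sub test {
    my @strings = map { ' ' x 1000 } 1 .. 50_000;
    return;
}

for my $run ( 1 .. 4 ) {
    my $start = time;
    test();
    printf "run %d: %.3fs\n", $run, time() - $start;
}
```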

      Tye pointed out {placeholder for the link} that the MS C runtime malloc() implementation is, um, sub-optimal for the way Perl uses memory. He suggested that I try building Perl to use PERL_MALLOC, which is tailored to Perl's requirements. (Which AS builds do not use; maybe for good reason.)

      I attempted this and discovered that the Makefile will only allow you to use PERL_MALLOC if you disable USE_IMP_SYS, which (though not stated in the Makefile hints) also precludes using USE_ITHREADS and USE_MULTIPLICITY.

      It turns out that Steve Hays has pursued a similar strategy and has posted a patch at perl-win32-porters that bypasses a problem in the Win32 sbrk() implementation and allows Perl to build with the combination of PERL_MALLOC, USE_IMP_SYS, USE_ITHREADS and USE_MULTIPLICITY.

      I also came up with a workaround, but Steve's is better than mine...and he is set up to produce patches properly whereas I am not.

      I've tried thrashing the patch applied to 5.8.4 with a test program (basically running my benchmark above on 10, 20 and 30 threads simultaneously), and so far it appears to be stable. THIS IS NOT OFFICIAL. Just my findings on a single, 1-cpu box. It may not be compatible with multi-cpu boxes. It may be that this is not a good test.

      Steve also posted a [news://nntp.perl.org/perl.perl5.porters/93142|Smoke report] of the patch that shows it failing with 'inconsistent' and 'dubious' results from a threads test in the smoke test suite. Whether these are related to the patch or not is not yet clear to me.

      The results I am seeing from running the benchmark using PERL_MALLOC show not only a marked improvement in consistency from the first run to the last; the bias is now slightly against the first run rather than for it, and it easily gets lost at higher numbers of iterations. It also performs *much* more quickly than the CRT malloc: around 5-7x faster. Not quite back to 5.6.1 performance, but a very definite improvement. The benchmark was constructed to highlight and exacerbate the bias and may be atypical of Perl usage, but maybe not for what you are doing.

      Whether the PERL_MALLOC/USE_IMP_SYS/USE_ITHREADS combination is truly safe yet is still not clear--but progress has been made.

      If you can't or don't want to risk the transition to building with the patch yet, but need to exclude the vagaries of this first-run bias, the simplest expedient is to run the cmpthese/timethese twice in each run: the first time for a single iteration, which will get the memory allocated and somewhat fragmented, and discard those results. Then run the benchmark again (within the same program) for a largish number of iterations and use those figures.
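
      A minimal sketch of that two-pass expedient (my illustration of the idea, not BrowserUk's actual code; the test subs just mirror the earlier benchmark):

```perl
#!/usr/bin/perl
# Sketch of the warm-up-then-measure idea: the first timethese() call
# exists only to get the process's memory allocated (and somewhat
# fragmented); the figures it prints are deliberately ignored. Only
# the second, larger run is treated as the real measurement.
use strict;
use warnings;
use Benchmark qw[ cmpthese timethese ];

sub test {
    my @strings = map { ' ' x 1000 } 1 .. 50_000;
    return;
}

my %tests = (
    Atest => \&test,
    Btest => \&test,
    Ctest => \&test,
    Dtest => \&test,
);

print "Warm-up pass (discard these figures):\n";
timethese( 1, \%tests );

print "\nReal measurement:\n";
cmpthese( 20, \%tests );
```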

      FYI: PERL_MALLOC in the Makefile shows up as -Dusemymalloc in the Smoke reports and perl -V banners. You may already know this, but it confused me for a while. But then, I'm easily confused.


        Woah! This is far deeper into the Perl internals than I have ever ventured. When I recently installed 5.8.4 on Darwin, it was the first time I had ever built from source. So I think I'm quite a ways away from exploring these issues.

        For the purpose of solving the problem I faced when I initiated this thread -- determining if an upgrade to one of my modules improves its performance in toto -- I think I'll KISS and use something like the script I posted in response to simonm above.

        But could you post some code that illustrates your approach of running cmpthese() or timethese() twice in a run, the first time for getting memory allocated and the second time for results? Thank you very much.