Just for information's sake I believe the problem is due to Benchmark trying to remove the timing for the "empty" loop from the results.
Oh, I know. I'm quite familiar with the code.
Elsewhere in this thread you said something like "I don't need something to run two loops and subtract the times for me", but that's not what Benchmark does. It also times an empty loop.
Don't quote me out of context. I was replying to BrowserUK's technique of putting the loop inside the code to benchmark. Once you put the (or a) loop into the code you benchmark, anything Benchmark.pm tries to do to compensate for running an empty loop is fruitless.
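For anyone following along, the technique in question looks roughly like this; this is my own sketch, not BrowserUK's actual code. The loop lives inside the sub handed to Benchmark, so whatever empty-loop correction Benchmark applies per call has nothing to do with the work being measured:

use strict;
use warnings;
use Benchmark qw(timethese);

# Sketch of the "loop inside the benchmarked code" technique: Benchmark
# only sees 10 calls, each of which does a million iterations itself,
# so its empty-loop compensation is irrelevant to the inner work.
my $counter = 0;
timethese(10, {
    loop_inside => sub {
        ++$counter & 1 and 1 for 1 .. 1_000_000;
    },
});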
So with your benchmark what is happening is you are timing two empty loops, subtracting one from the other and then seeing the consequence of noise in the calculation.
As I said elsewhere, I deliberately picked a benchmark with a tiny loop body so I could quickly get an example with negative times. It does happen with other code as well, although far less commonly. And I wasn't going to spend a day constructing one. All I wanted to do was to show that the problem wasn't an issue of the past (which was the claim being made).
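For concreteness, something along these lines (my own sketch, not the exact code posted earlier in the thread) is enough to trigger it: each sub does so little work that the noise in Benchmark's empty-loop subtraction is of the same order as the measurement itself, and the reported times can come out negative.

use strict;
use warnings;
use Benchmark qw(cmpthese);

# Tiny loop bodies: the work per call is comparable to the calling and
# empty-loop overhead that Benchmark subtracts out.
my ($c1, $c2) = (0, 0);
cmpthese(-3, {
    'and' => sub { ++$c1 & 1 and 1 },
    'mod' => sub { ++$c2 % 2 and 1 },
});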
Note also that the timings I showed that didn't use Benchmark were pointless: I was using gettimeofday() to get a timestamp, when I should of course have used times() (which is what Benchmark uses as well).
Here's the corrected version:
#!/usr/bin/perl
use strict;
use warnings;

my $ITERATIONS = 10_000_000;
my $RUNS       = 10;

my $counter1 = 0;
my $counter2 = 0;
my $counter3 = 0;
my $counter4 = 0;

foreach (1 .. $RUNS) {
    # times() returns CPU times, so wall-clock noise doesn't skew the results.
    my ($u1, $s1) = times;
    for (1 .. $ITERATIONS) {++$counter1 & 1 and 1}
    my ($u2, $s2) = times;
    for (1 .. $ITERATIONS) {++$counter2 % 2 and 1}
    my ($u3, $s3) = times;
    for (1 .. $ITERATIONS) {$a = ++$counter3 & 1}
    my ($u4, $s4) = times;
    for (1 .. $ITERATIONS) {$a = ++$counter4 % 2}
    my ($u5, $s5) = times;

    # User + system CPU time consumed by each of the four loops.
    my $d1 = $u2 + $s2 - $u1 - $s1;
    my $d2 = $u3 + $s3 - $u2 - $s2;
    my $d3 = $u4 + $s4 - $u3 - $s3;
    my $d4 = $u5 + $s5 - $u4 - $s4;
    printf "And: %.2f Mod: %.2f; And: %.2f Mod: %.2f\n", $d1, $d2, $d3, $d4;
}

__END__
And: 2.89 Mod: 3.25; And: 2.82 Mod: 3.05
And: 2.74 Mod: 3.21; And: 2.76 Mod: 3.05
And: 2.69 Mod: 3.16; And: 2.91 Mod: 3.04
And: 2.67 Mod: 3.15; And: 2.79 Mod: 3.21
And: 2.71 Mod: 3.15; And: 2.75 Mod: 3.04
And: 2.80 Mod: 3.16; And: 2.75 Mod: 3.04
And: 2.69 Mod: 3.16; And: 2.93 Mod: 3.08
And: 2.67 Mod: 3.15; And: 2.75 Mod: 3.19
And: 2.69 Mod: 3.17; And: 2.75 Mod: 3.03
And: 2.80 Mod: 3.18; And: 2.76 Mod: 3.05
Now, do I care whether it also times the overhead of the loop? No. Either the overhead of the loop is significant, or it isn't. If it's significant, it doesn't matter (from a performance point of view) which solution I pick: even if I pick the slower one, the difference will only be noticeable in a so-called "tight" loop, but then the overhead of the loop itself becomes significant. And if the loop overhead isn't significant, well, then it doesn't really matter that I add the overhead to the results, does it?
Now, if I really want to be fancy (and when I do need to benchmark something more seriously than something trivial on perlmonks), I run the benchmark 100 or 1000 times, keeping track of the results, discarding the lowest and highest 5% of the results, and averaging the rest (calculating the standard deviation as well). And I do it with different datasets. All things Benchmark doesn't support anyway.
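In case anyone wants to roll their own, here's a rough sketch of that approach; the helper name (trimmed_bench) and the exact cut-offs are mine, and it's meant as an illustration rather than a finished tool:

#!/usr/bin/perl
use strict;
use warnings;

# Run a piece of code many times, time each run with times(), throw away
# the fastest and slowest 5% of the samples, and report the mean and the
# standard deviation of what's left.
sub trimmed_bench {
    my ($code, $runs) = @_;
    my @samples;
    for (1 .. $runs) {
        my ($u1, $s1) = times;
        $code->();
        my ($u2, $s2) = times;
        push @samples, ($u2 + $s2) - ($u1 + $s1);
    }
    @samples = sort { $a <=> $b } @samples;
    my $trim = int(@samples * 0.05);
    if ($trim) {
        splice @samples, 0, $trim;    # drop the lowest 5%
        splice @samples, -$trim;      # drop the highest 5%
    }
    my $mean = 0;
    $mean += $_ for @samples;
    $mean /= @samples;
    my $var = 0;
    $var += ($_ - $mean) ** 2 for @samples;
    return ($mean, sqrt($var / @samples));
}

# Each run needs to do enough work to register against the 0.01s
# resolution of times().
my $counter = 0;
my ($mean, $sd) = trimmed_bench(sub {
    ++$counter & 1 and 1 for 1 .. 1_000_000;
}, 100);
printf "mean: %.3f s  sd: %.3f s\n", $mean, $sd;

Running it with different datasets just means passing in different subs (or one sub closed over different data).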
In reply to Re^8: &1 is no faster than %2 when checking for oddness. (Careful what you benchmark)
by Anonymous Monk
in thread &1 is no faster than %2 when checking for oddness. Oh well.
by diotalevi