in reply to Re^4: &1 is no faster than %2 when checking for oddness. (Careful what you benchmark)
in thread &1 is no faster than %2 when checking for oddness. Oh well.

You're right. I never use gettimeofday, and mistakenly assumed it returned milliseconds, not microseconds.

Now, besides being wrong, you are also inconsistent. In your first point, you accuse me of not dividing by the number of iterations, as if the value wouldn't be a number of sub-second units. Then in your second point you do think it's a number of sub-second units. You can't have it both ways.

I misunderstood your arcane way of doing floating-point math and thought you were dividing by the (wrong) number of iterations. I misread your code, for which I apologise, but there is no inconsistency. That is reason enough to use a module rather than hand-code the math.

I also noted: "Without having analysed it too closely, you appear to have a precedence problem in your delta calculations."

That said, even if we return to your arcane floating-point math and your bizarre failure to break the results down on a per-iteration basis, the results of your benchmark method, modified by the removal of your $a variable, still show that & 1 is consistently faster than % 2.

#! perl -slw
use strict;
use Time::HiRes 'gettimeofday';

our $ITERS ||= 10_000_000;

my $counter1 = 0;
my $counter2 = 0;

my( $s1, $m1 ) = gettimeofday;
for (1 .. $ITERS) { ++$counter1 & 1 and 1 }
my( $s2, $m2 ) = gettimeofday;
for (1 .. $ITERS) { ++$counter2 % 2 and 1 }
my( $s3, $m3 ) = gettimeofday;

my $d1 = $s2 - $s1 + ($m2 - $m1) / 1_000_000;
my $d2 = $s3 - $s2 + ($m3 - $m2) / 1_000_000;
printf "And: %.9f Mod: %.9f\n", $d1, $d2;

__END__
c:\test>junk2 -ITERS=1e6
And: 0.258308000 Mod: 0.304192000

c:\test>junk2 -ITERS=1e6
And: 0.258755000 Mod: 0.272495000

c:\test>junk2 -ITERS=1e6
And: 0.259628000 Mod: 0.302872000

c:\test>junk2 -ITERS=10e6
And: 2.656250000 Mod: 2.828125000

c:\test>junk2 -ITERS=10e6
And: 2.671875000 Mod: 2.859375000

c:\test>junk2 -ITERS=10e6
And: 2.671875000 Mod: 2.859375000

c:\test>junk2 -ITERS=100e6
And: 26.187500000 Mod: 28.375000000

c:\test>junk2 -ITERS=100e6
And: 26.265625000 Mod: 28.328125000

And the more iterations you run (thereby reducing the obscuring effects of the invariant parts of the code under test), the more consistent it becomes.

Just as was shown by both my Benchmark method and lidden's external timer method.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^6: &1 is no faster than %2 when checking for oddness. (Careful what you benchmark)
by Anonymous Monk on Nov 16, 2006 at 15:00 UTC
    I misunderstood your arcane way of doing floating point math
    Arcane? I've two time points, both given as a number of seconds and a number of microseconds since some point in time. Let the first timestamp be (S1, M1), the second (S2, M2). So, the difference is (S2 + M2 / 1000000) - (S1 + M1 / 1000000). Factoring out common code is generally thought good here, and it has the additional benefit of reducing the number of divisions needed, so using some primary-school arithmetic, we get S2 - S1 + (M2 - M1) / 1000000.

    Not arcane. Elementary.
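    As a quick illustration (using made-up timestamps, not taken from the benchmark above), the factored form gives the same delta as the long form while performing one division instead of two:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical timestamps: (seconds, microseconds) pairs,
# in the shape gettimeofday returns in list context.
my ( $s1, $m1 ) = ( 100, 900_000 );
my ( $s2, $m2 ) = ( 101, 100_000 );

# Long form: convert each timestamp to seconds first, then subtract.
my $long = ( $s2 + $m2 / 1_000_000 ) - ( $s1 + $m1 / 1_000_000 );

# Factored form: subtract the parts, then divide once.
my $short = $s2 - $s1 + ( $m2 - $m1 ) / 1_000_000;

printf "%.6f %.6f\n", $long, $short;    # both print 0.200000
```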

    I also noted: "Without having analysed it too closely, you appear to have a precedence problem in your delta calculations."
    If you are going to critique my posting publicly, I think I deserve the courtesy of you at least analysing some simple arithmetic a bit more closely.

      What makes it arcane is that if you use the scalar form of gettimeofday rather than the list form, no division is required -- just a floating-point subtraction.

      Perhaps when the underlying POSIX call was defined it was necessary to avoid floating-point math in order to achieve microsecond accuracy, because double-precision floating point wasn't commonly available -- though I'd question whether the timers of that era were good enough to deliver single-microsecond precision anyway.

      Suffice it to say, it has been a long time since computers were unable to perform a floating-point subtraction accurately to six decimal digits of precision.
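      A minimal sketch of that scalar-context form (the loop body and iteration count here are illustrative, not the original benchmark):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes 'gettimeofday';

# In scalar context, gettimeofday returns fractional seconds directly,
# so the delta is one floating-point subtraction -- no / 1_000_000 needed.
my $t0 = gettimeofday;

my $counter = 0;
for ( 1 .. 100_000 ) { ++$counter & 1 and 1 }

my $t1 = gettimeofday;

printf "Elapsed: %.6f seconds\n", $t1 - $t0;
```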

      I did apologise for having misunderstood your math, and I'll do so again. I was expecting to see your code divide the timings by the iteration count, because that allows direct comparison of the effect of different numbers of iterations. I saw the discrepancy between the iteration count and the divisor and suspected a problem. I looked at the precedence and saw a further discrepancy from the math I thought you should be doing. I was wrong, and I again apologise for that.

      Once I realised that you were not trying to calculate a per iteration value, the purpose of the math became clear--but I didn't arrive at that realisation until I saw your second post.

      If you are going to critique my posting publicly, I think I deserve the courtesy of you at least analysing some simple arithmetic a bit more closely.

      Forget not that I was responding to your (incorrect) critique of my post. Incorrect because in the end, your benchmark method shows the same results as mine.

      Also, it would be hard for me to respond to your post privately, with you being anonymous an'all.


        The critique was correct. In the end the benchmark results may be the same, but the process is what's important. Just because you had offsetting errors, or because your omission didn't affect the benchmark, doesn't mean you didn't make an omission.