in reply to Re^5: &1 is no faster than %2 when checking for oddness. (Careful what you benchmark)
in thread &1 is no faster than %2 when checking for oddness. Oh well.

I misunderstood your arcane way of doing floating point math
Arcane? I've two time points, each given as a number of seconds and a number of microseconds since some point in time. Let the first timestamp be (S1, M1), the second (S2, M2). So, the difference is (S2 + M2 / 1000000) - (S1 + M1 / 1000000). Factoring out common code is generally thought a good thing around here, and it has the additional benefit of reducing the number of divisions needed, so using some primary school arithmetic, we get S2 - S1 + (M2 - M1) / 1000000.
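A small sketch confirms the rearrangement is exact (the timestamp values here are made up purely for illustration):

```perl
use strict;
use warnings;

# Hypothetical timestamps: (seconds, microseconds) pairs.
my ( $s1, $m1 ) = ( 10, 500_000 );
my ( $s2, $m2 ) = ( 12, 250_000 );

# Naive form: convert each timestamp to fractional seconds first (two divisions).
my $naive = ( $s2 + $m2 / 1_000_000 ) - ( $s1 + $m1 / 1_000_000 );

# Factored form: one division instead of two.
my $factored = $s2 - $s1 + ( $m2 - $m1 ) / 1_000_000;

print "$naive $factored\n";    # both are 1.75
```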

Not arcane. Elementary.

I also noted "Without having analysed it too closely, you appear to have a precedence problem in your delta calculations.".
If you are going to criticise my posting publicly, I think I deserve the courtesy of you at least analysing some simple arithmetic a bit more closely.

Re^7: &1 is no faster than %2 when checking for oddness. (Careful what you benchmark)
by BrowserUk (Patriarch) on Nov 16, 2006 at 15:32 UTC

    What makes it arcane is that if you use the scalar form of gettimeofday rather than the list form, no division is required--just a floating point subtraction.
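    In scalar context, Time::HiRes::gettimeofday returns the time as a single floating point number of fractional seconds, so the delta really is one subtraction. A minimal sketch (the timed section is just a placeholder):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday);

my $t1 = gettimeofday();    # scalar context: fractional seconds

# ... the code being timed would go here ...

my $t2 = gettimeofday();

my $delta = $t2 - $t1;      # a single FP subtraction, no division
printf "elapsed: %.6f seconds\n", $delta;
```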

    Perhaps when the underlying POSIX call was defined it was necessary to avoid floating point math in order to achieve microsecond accuracy, because double precision floating point wasn't commonly available--though I'd question whether the timers of that era were good enough to deliver single-microsecond accuracy anyway.

    Suffice it to say, it has been a long while since computers were unable to perform a floating point subtraction accurately to six decimal digits of precision.

    I did apologise for having misunderstood your math, and I'll do so again. I was expecting to see your code divide the elapsed time by the iteration count--because that allows direct comparison of the effect of different numbers of iterations. I saw the discrepancy between the iterations and the divisor and suspected a problem. I looked at the precedence and saw a further discrepancy with the math I thought you should be doing. I was wrong, and I again apologise for that.

    Once I realised that you were not trying to calculate a per iteration value, the purpose of the math became clear--but I didn't arrive at that realisation until I saw your second post.
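    The per-iteration calculation I was expecting would look something like this minimal sketch (the loop body here is only a stand-in for the operation being benchmarked):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday);

my $iters = 1_000_000;
my $t0    = gettimeofday();

my $odd = 0;
for my $n ( 1 .. $iters ) {
    $odd++ if $n & 1;    # stand-in for the operation under test
}

# Dividing elapsed time by the count gives a per-iteration figure,
# directly comparable across different numbers of iterations.
my $per_iter = ( gettimeofday() - $t0 ) / $iters;
printf "%d iterations, %.9f s each\n", $iters, $per_iter;
```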

    If you are going to criticise my posting publicly, I think I deserve the courtesy of you at least analysing some simple arithmetic a bit more closely.

    Forget not that I was responding to your (incorrect) critique of my post. Incorrect because, in the end, your benchmark method shows the same results as mine.

    Also, it would be hard for me to respond to your post privately, with you being anonymous an'all.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      The critique was correct. In the end the benchmark results might be the same, but the process is what's important. Just because you had offsetting errors, or because your omission didn't affect the benchmark, doesn't mean you didn't make an omission.