in reply to Decimal precision issue: Windows vs. Unix

You're worried that your floating point value (an approximation to begin with) is showing a rounding error equivalent to one meter out of the distance to the nearest star other than our own Sun?

Why?
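To make that approximation concrete, here is a minimal sketch (the values are just examples, not the original poster's code):

    use strict;
    use warnings;

    # Neither 0.1 nor 0.2 has an exact binary representation, so their
    # sum is only an approximation of 0.3.
    my $sum = 0.1 + 0.2;

    print  "default stringification: $sum\n";        # prints 0.3
    printf "full precision:          %.17g\n", $sum; # 0.30000000000000004
    print  "equal to 0.3? ", ( $sum == 0.3 ? "yes" : "no" ), "\n";   # no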

Re^2: Decimal precision issue: Windows vs. Unix
by whakka (Hermit) on Jan 09, 2009 at 21:42 UTC
    They need to be exact because they're being used for look-up values. But when you put it like that, I should be rounding in the first place. Think of my question as academic :)

    Edit: Okay, you answered it in your post: they're approximations. To be more specific, I guess this behavior is expected; is it dependent on the operating system, the processor, or something else?
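    One way to see whether the stored value itself (rather than its printed form) differs between the two systems is to dump it at full precision and as raw bytes; a sketch, with a placeholder calculation:

        use strict;
        use warnings;

        my $x = 22.90 - 22.54;   # placeholder; substitute the value that differs

        # Default stringification rounds to about 15 significant digits and
        # can vary with the C runtime's formatting; on IEEE 754 hardware the
        # stored bit pattern itself normally does not.
        print  "as string: $x\n";
        printf "17 digits: %.17g\n", $x;
        printf "raw bytes: %s\n", unpack 'H*', pack 'd', $x;   # native byte order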

      This statement:

          "They need to be exact because they're being used for look-up values."

      and this one:

          "I should have mentioned it's a lookup value in a database table."

      suggest to me that there's some sort of cognitive mismatch between what your code and database are supposed to accomplish and what you are actually trying to implement. If the various observations here about the inherent inexactness of floating point values don't solve your database lookup problem, you may need to start another thread about what the real problem is (trying to do database lookups on the basis of computed values, or something like that).
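      If the underlying task really is matching computed floats against stored keys, one common workaround is to quantize both sides to the same fixed precision before comparing. A rough sketch, assuming a six-decimal key and a hypothetical table and column name:

          use strict;
          use warnings;

          # Round the computed value to a fixed number of decimals and use
          # the resulting string as the key; build the table's keys in the
          # same canonical form.
          my $computed = 3.14159265358979;           # stand-in for the runtime calculation
          my $key      = sprintf '%.6f', $computed;  # "3.141593"

          # Then match on the canonical string, e.g. via a DBI placeholder:
          # my $row = $dbh->selectrow_hashref(
          #     'SELECT * FROM lookup_table WHERE key_col = ?', undef, $key,
          # );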

      If it's a lookup value, I assume it's a constant; why calculate it at runtime at all? (See the sketch at the bottom of this thread.)
        I should have mentioned it's a lookup value in a database table.
      If you're interested, check my scratchpad for some code to test accuracy based on algorithms from an astronomy book.
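      Following up on the point about constants above: if the value never changes, it can be computed once at compile time and frozen as a string. A minimal sketch, assuming a hypothetical six-decimal key precision and placeholder numbers:

          use strict;
          use warnings;

          # Compute the key once at compile time, round it to the precision
          # the table expects, and use the frozen string everywhere else.
          use constant LOOKUP_KEY => sprintf '%.6f', 22.90 - 22.54;

          print LOOKUP_KEY, "\n";   # "0.360000" on every run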