in reply to Re: Decimal precision issue: Windows vs. Unix
in thread Decimal precision issue: Windows vs. Unix

They need to be exact because they're being used for look-up values. But now that you put it that way, I realize I should probably be rounding them in the first place. Think of my question as academic :)

Edit: Okay, you answered it in your post - they're approximations. To be more specific, I guess this behavior is expected, but is it dependent on the operating system, the processor, or something else?
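A minimal sketch of what's going on, assuming a stock Perl built on IEEE 754 doubles (the usual case on both Windows and Unix): the binary approximation of a decimal fraction is the same on either platform, and what can differ is how many digits get printed when the value is stringified, so rounding to a fixed number of decimals hides the difference.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # 0.1 and 0.2 have no exact binary representation, so their sum is
    # only an approximation of 0.3 - on any OS or processor that uses
    # IEEE 754 doubles.
    my $x = 0.1 + 0.2;

    print  "default stringification: $x\n";        # usually prints 0.3
    printf "full precision:          %.17g\n", $x; # 0.30000000000000004
    printf "rounded for lookup:      %.6f\n", $x;  # 0.300000

    # Rounding to a fixed number of decimals before using the value as a
    # lookup key yields the same string on every platform.
    my $key = sprintf "%.6f", $x;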


Replies are listed 'Best First'.
Re^3: Decimal precision issue: Windows vs. Unix
by graff (Chancellor) on Jan 10, 2009 at 02:07 UTC
    This statement:
        "They need to be exact because they're being used for look-up values."
    and this one:
        "I should have mentioned it's a lookup value in a database table."

    suggest to me that there's some sort of cognitive mismatch between what your code and database are supposed to accomplish and what you are actually trying to implement. If the various observations here about the inherent inexactness of floating-point values don't solve your database lookup problem, you may need to start another thread about what the real problem is (trying to do database lookups on the basis of computed values, or something like that).
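    If the lookups really are being done against computed floating-point values, one workable pattern is to stop comparing for exact equality. A rough sketch, assuming DBD::SQLite is available; the table, column, and tolerance below are invented for illustration and not taken from the original post:

        use strict;
        use warnings;
        use DBI;

        # Hypothetical schema - the real table and column names are unknown.
        my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                               { RaiseError => 1, PrintError => 0 });
        $dbh->do('CREATE TABLE rates (id INTEGER, factor REAL)');
        $dbh->do('INSERT INTO rates VALUES (1, 0.3)');

        my $computed = 0.1 + 0.2;   # runtime calculation; not exactly 0.3
        my $eps      = 1e-9;        # tolerance for the comparison

        # Match within a small tolerance instead of "WHERE factor = ?", so
        # last-bit differences between platforms don't break the lookup.
        my $rows = $dbh->selectall_arrayref(
            'SELECT id FROM rates WHERE factor BETWEEN ? AND ?',
            undef, $computed - $eps, $computed + $eps,
        );
        print "matched id: $rows->[0][0]\n" if @$rows;

    The other common fix is to round both the stored value and the computed value to the same fixed number of decimals (e.g. with sprintf '%.6f') and compare those canonical strings.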

Re^3: Decimal precision issue: Windows vs. Unix
by mikelieman (Friar) on Jan 09, 2009 at 22:39 UTC
    If it's a lookup value, I assume it's a constant - so why calculate it at runtime at all? (A tiny sketch of this idea follows after this exchange.)
      I should have mentioned it's a lookup value in a database table.
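    A tiny sketch of the precompute-it-once idea (the value and precision here are invented for illustration):

        use strict;
        use warnings;

        # Computed once, written down, and used verbatim from then on;
        # nothing is recalculated at runtime, so nothing can drift
        # between platforms.
        use constant LOOKUP_KEY => '0.333333';

        print LOOKUP_KEY, "\n";   # always the same string everywhere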
Re^3: Decimal precision issue: Windows vs. Unix
by dwhite20899 (Friar) on Jan 10, 2009 at 23:44 UTC
    If you're interested, check my scratchpad for some code to test accuracy based on algorithms from an astronomy book.