in reply to Re^3: Getting different results with $var++ and $var += 1
in thread Getting different results with $var++ and $var += 1

It really makes me wonder why we persist in using binary floating point for decimal arithmetic !
Well, it really makes me wonder why programming languages (and OSes) have no problem piling layer upon layer of abstraction, yet still let their basic datatypes be determined by the hardware the machine happens to be running on.

Take Perl, for instance. It provides (almost unlimited-size) strings as a basic data type, despite strings not being native to the hardware, nor even a basic type in C. It doesn't force the programmer to cast numerical values between integers, longs, floats or doubles. It prides itself on taking care of the gritty details and not bothering the programmer with them.

Yet if a programmer is surprised that '0.84 - 0.34 == 0.5' isn't true, we scold him for being ignorant, for not knowing the internal hardware representation of the data.

IMO, that sucks. If I wanted to program in a way that forces me to consider the internal hardware representation, I could code in C. I wish Perl had arbitrary precision integers, and could add/subtract/compare decimal numbers without losing precision.
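To see exactly what that comparison does, here is a minimal demonstration (assuming a Perl built with the usual IEEE 754 doubles):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Neither 0.84 nor 0.34 has an exact binary representation, and the two
# rounding errors do not cancel: the difference lands one ulp below 0.5.
my $diff = 0.84 - 0.34;

printf "%.17g\n", $diff;            # 0.49999999999999994
print "equal\n" if $diff == 0.5;    # never printed
```

The default stringification ("print $diff") shows "0.5", which is precisely why the failing comparison is so surprising; only the full 17-digit form reveals the discrepancy.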

Replies are listed 'Best First'.
Re^5: Getting different results with $var++ and $var += 1
by gone2015 (Deacon) on Dec 04, 2008 at 09:23 UTC
    I wish Perl had arbitrary precision integers, and could add/subtract/compare decimal numbers without losing precision.

    Decimal floating point is certainly doable... I did it years ago on a Z80. It's not what you'd call cheap, run-time-wise -- though with a 64 bit processor and fast multiply/divide instructions, it's probably a whole lot less painful than it used to be.

    If there isn't one already, I'd be amused to do a "DecFlt" equivalent of "BigInt". The first decision is whether to do that as big precision floating point, or indefinite precision fixed point.

    From a scientific programming perspective, binary floating point is better than any other radix. IBM machines used to use radix 16, to save time and hardware during normalisation and alignment shifts. Unfortunately, that saving came at the cost of the wonderfully named "wobbling precision". With 24 bits of precision, radix 16 floating point results are good to +/- any one of 2^-24, 2^-23, 2^-22 or 2^-21, depending on the value of the leading base 16 digit !
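    The "wobble" is easy to see by just counting bits: a leading hex digit of 1 is binary 0001, so three of the 24 mantissa bits carry no information. A back-of-the-envelope sketch (plain bit-counting, not a simulation of the IBM hardware):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# For a radix-16 float with a 24-bit (6 hex digit) mantissa, the number
# of *significant* bits depends on the leading hex digit: a leading 1
# wastes three bits, while a leading 8..15 uses all four.
for my $d (1 .. 15) {
    my $lead_bits = length sprintf "%b", $d;   # bits actually used by $d
    my $eff       = 24 - (4 - $lead_bits);     # effective precision in bits
    printf "leading digit %2d: %d significant bits (good to ~2^-%d)\n",
           $d, $eff, $eff;
}
```

    A leading digit of 1 gives 21 significant bits, while 8 through 15 give the full 24 -- the 2^-21 .. 2^-24 range described above.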

      Shouldn't bigrat do most of the "do math without losing precision" thing?

        Yes you could do everything as a rational.

        The thing about fractions is that you end up doing quite a lot of work finding the HCF of the denominators.

        An indefinite precision decimal fixed point arithmetic would, essentially, be a "bigrat" in which all denominators are powers of 10. That would mean, of course, that 1/3 would have a "representation" error -- but we are at least used to that.
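        For what it's worth, the core bigrat pragma (shipped with Perl since 5.8) already does the fully-rational version of this; under it the comparison from the top of the thread comes out true:

```perl
use strict;
use warnings;
use bigrat;    # numeric constants become exact Math::BigRat rationals

# 0.84 and 0.34 parse as the exact rationals 21/25 and 17/50,
# so the subtraction is exact: 42/50 - 17/50 = 25/50 = 1/2.
my $diff = 0.84 - 0.34;
print "$diff\n";                  # prints "1/2"
print "equal\n" if $diff == 0.5;  # printed
```

        Note the HCF work happening silently here: the result is reduced from 25/50 to 1/2 before it is stored.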

        The advantage of a fixed (but large) precision decimal floating point would be that functions like exp() and log() etc. (which do appear in financial calculations) would be more straightforward... as would any arithmetic involving widely different scales of numbers. You have to understand the effect of rounding...

        Now that I've looked, I see that there is Math::BigFloat which is decimal (though that small fact doesn't exactly leap out at you). I see it is implemented using BigInt for the significands, and allows quite fine control over the precision. So I'll go away now.
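        To illustrate (a minimal sketch using that core Math::BigFloat module): both operands from the top of the thread are exact in base 10, so the decimal subtraction loses nothing:

```perl
use strict;
use warnings;
use Math::BigFloat;

# Pass the values as *strings* so they are never filtered through a
# binary double first; in decimal the subtraction is then exact.
my $diff = Math::BigFloat->new('0.84') - Math::BigFloat->new('0.34');

print $diff->bstr(), "\n";          # prints "0.5"
print "equal\n" if $diff == 0.5;    # the comparison now holds
```

        (0.5 on the right-hand side of the comparison is harmless as a plain literal, since 0.5 happens to be exact in binary too.)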