in reply to Re: Getting different results with $var++ and $var += 1
in thread Getting different results with $var++ and $var += 1

Yeah, it probably was my decision actually. Feel free to point and laugh.

It's listed as type decimal (10,2), so not a float. Sorry about that.

I'm interested in why integral types are preferred for currency. I know it's off topic, but I'd like to learn more about why you have this preference (or really, why I should have it).

Re^3: Getting different results with $var++ and $var += 1
by gone2015 (Deacon) on Dec 03, 2008 at 23:38 UTC

    Ah. The old decimal fraction in binary floating point malarkey.

    The reason for preferring to hold currency as, say, pennies or cents is that it ensures that the value is exact. If you hold the pennies or cents as a fraction in binary floating point, it is unlikely to be exact.
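
    A minimal sketch of the difference (the amounts are made up): accumulate one hundred payments of one penny, once as a floating point 0.01 and once as an integer count of pence. On a typical double build the float total is unlikely to compare equal to exactly 1, while the penny count trivially does.

      use strict;
      use warnings;

      my $pounds = 0 ;      # running total in floating point pounds
      my $pence  = 0 ;      # running total in integer pence

      for ( 1 .. 100 ) {
          $pounds += 0.01 ;
          $pence  += 1 ;
      }

      printf "float  : %.17g (%s 1)\n", $pounds, $pounds == 1 ? "==" : "!=" ;
      printf "pennies: %d.%02d\n", int( $pence / 100 ), $pence % 100 ;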

    For example, when decimal 1.51 is converted to a binary fraction the result is:

      1.1000_0010_1000_1111_0101_1100_0010_1000_1111_0101_1100_0010_1000_1111_0101_1100_0010_1000_1111_0101...
    
    which you can see is a repeating binary fraction, some part of which will be rounded off.

    Most decimal fractions are like this... so even before any rounding errors are introduced by the arithmetic you go on to do, the value starts out with a built-in "representation" error.

    Much of the time you won't see the "representation" error, because conversion back to decimal rounds it off. This can lead to bafflement when two values look the same when printed out, but fail  $x == $y.
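
    You can see that from Perl without doing any arithmetic at all (assuming the usual 64-bit doubles): the default stringification shows a clean 1.51, but asking printf for more digits exposes the stored approximation.

      printf "%s\n",    1.51 ;   # default stringification rounds it off
      printf "%.20f\n", 1.51 ;   # twenty decimal places show the built-in error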

    Addition and subtraction are the more difficult floating point operations, so you're more likely to see the problems there. Consider:

      print 1.09 - 1, "\n" ;
      print "0.84 - 0.34 == ", 0.84 - 0.34,
            ( 0.84 - 0.34 == 0.5 ? " ==" : " but !=" ), " 0.5\n" ;
    which gives:
      0.0900000000000001
      0.84 - 0.34 == 0.5 but != 0.5
    
    It really makes me wonder why we persist in using binary floating point for decimal arithmetic !

      It really makes me wonder why we persist in using binary floating point for decimal arithmetic !
      Well, it really makes me wonder why programming languages (and OSes) have no problem using layers upon layers, yet still let their basic data types be determined by the hardware the machine is running on.

      Take Perl for instance. It provides (almost unlimited sized) strings as a basic data type, despite strings not being native to the hardware, or even a basic type in C. It doesn't force the programmer to cast numerical values between integers, longs, floats or doubles. It prides itself on taking care of the gritty details without bothering the programmer with them.

      Yet if a programmer is surprised that '0.84 - 0.34 == 0.5' isn't true, we scold him for being ignorant of the internal hardware representation of the data.

      IMO, that sucks. If I wanted to program in a way that makes me consider the internal hardware representation, I could code in C. I wish Perl had arbitrary precision integers, and could add/subtract/compare decimal numbers without losing precision.
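
      (To be fair, the core Math::BigInt module does give arbitrary precision integers if you reach for it; the complaint stands that you have to reach for it. A minimal sketch:)

        use strict;
        use warnings;
        use Math::BigInt ;                          # core module: arbitrary precision integers

        my $big = Math::BigInt->new(2) ** 100 ;     # 2**100, held exactly
        print "2**100     = $big\n" ;
        print "2**100 + 1 = ", $big + 1, "\n" ;     # still exact, no float rounding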

        I wish Perl had arbitrary precision integers, and could add/subtract/compare decimal numbers without losing precision.

        Decimal floating point is certainly doable... I did it years ago on a Z80. It's not what you'd call cheap, run-time-wise -- though with a 64 bit processor and fast multiply/divide instructions, it's probably a whole lot less painful than it used to be.

        If there isn't one already, I'd be amused to do a "DecFlt" equivalent of "BigInt". The first decision is whether to do that as big precision floating point, or indefinite precision fixed point.
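
        As it happens, the core Math::BigFloat is pretty much a "DecFlt" already: it keeps its mantissa and exponent in decimal, so values given as strings stay exact and the subtraction and comparison behave as the decimal notation suggests. A minimal sketch, redoing the earlier failing example:

          use strict;
          use warnings;
          use Math::BigFloat ;    # core module: decimal mantissa and exponent

          my $diff = Math::BigFloat->new('0.84') - Math::BigFloat->new('0.34') ;
          print "0.84 - 0.34 == ", $diff,
                ( $diff == Math::BigFloat->new('0.5') ? " ==" : " but !=" ), " 0.5\n" ;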

        From a scientific programming perspective, binary floating point is better than any other radix. IBM machines used to use radix 16, to save time and hardware during normalisation and alignment shifts. Unfortunately, that saving came at the cost of the wonderfully named "wobbling precision". With 24 bits of precision, radix 16 floating point results are good to +/- any one of 2^-24, 2^-23, 2^-22 or 2^-21, depending on the value of the leading base 16 digit (a leading hex digit of 1 means the top three bits of the fraction are zero, so only 21 bits carry any information) !