in reply to Re^6: Decimal Floating Point (DFP) and does Perl needs DFP?
in thread Decimal Floating Point (DFP) and does Perl needs DFP?

proof of concept -> here

I don't think there's any guarantee that you'll get a more accurate result using that exponentiation approach.
I ran the script below (on perl-5.16 with an NV of "double") and found that sometimes your approach gave the best approximation, other times the "left-to-right" variant of your approach gave the best approximation, and other times just doing the series of multiplications gave the best approximation. (See the comments in the script.)
I used the mpfr library as my reference for the correct 15 digit value - calculated with 1000 bits of precision (which is way overkill).
It's true that your approach uses fewer calculations, but some of those squarings can really amplify the errors produced by the rounding.
#!perl -l
use strict;
use warnings;
use Math::MPFR qw(:mpfr);

my $e = 107;
my $v = "5.34";

Rmpfr_set_default_prec(1000);

mpfr($v, $e);          # reference value, calculated with 1000 bits of precision
print lanx($v, $e);    # Wins for 5.34 ** 107 & 5.34 ** 100.
print menezes($v, $e); # Wins for 5.231 ** 107 & 5.231 ** 100.
print by_mul($v, $e);  # Wins for 5.35 ** 107 & 5.35 ** 100.

# Reference: assign and exponentiate with mpfr, print to 15 decimal digits.
sub mpfr {
    my $val = shift;
    my $exp = shift;
    my $ret = Math::MPFR->new($val);
    $ret **= $exp;
    Rmpfr_out_str($ret, 10, 15, MPFR_RNDN);
    printf "\n";
}

# Right-to-left binary exponentiation (square the base, multiply in set bits).
sub lanx {
    my $val = shift;
    $val += 0;
    my $exp = shift;
    my $ret = 1;
    for my $bit (reverse split //, sprintf '%b', $exp) {
        $ret *= $val if $bit;
        $val **= 2;
    }
    return $ret;
}

# Left-to-right binary exponentiation (square the result, multiply in set bits).
sub menezes {
    my $val = shift;
    $val += 0;
    my $exp = shift;
    my $ret = 1;
    for my $bit (split //, sprintf '%b', $exp) {
        $ret **= 2;
        $ret *= $val if $bit;
    }
    return $ret;
}

# Plain repeated multiplication.
sub by_mul {
    my $val = shift;
    $val += 0;
    my $exp = shift;
    my $ret = 1;
    $ret *= $val for 1 .. $exp;
    return $ret;
}
I could be wrong, of course. (It's happened before ;-)

Cheers,
Rob

Re^8: Decimal Floating Point (DFP) and does Perl needs DFP?
by LanX (Saint) on Jan 20, 2015 at 15:10 UTC
    I don't really know much about MPFR.

    I needed predictable accuracy when raising a decimal fraction to the power 6e7 (a highly contrived use case).

    BigFloat is decimal, thus avoiding binary rounding errors in this case, and the limited number of multiplications allowed me to bound the necessary precision (one extra digit per multiplication).
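
    For illustration, here's a minimal sketch (not the actual proof of concept; the digit count and exponent are just assumed for the example) of how Math::BigFloat's class accuracy can carry that "one extra digit per multiplication" bound:

    use strict;
    use warnings;
    use Math::BigFloat;

    my $wanted = 15;    # assumed: significant digits wanted in the final result
    my $exp    = 107;   # assumed exponent for the example

    # One guard digit per multiplication, per the bound mentioned above.
    Math::BigFloat->accuracy($wanted + $exp);

    my $v   = Math::BigFloat->new('5.34');  # exact decimal, no binary rounding on assignment
    my $ret = Math::BigFloat->new(1);
    $ret *= $v for 1 .. $exp;

    print $ret->bround($wanted), "\n";      # round back down to the wanted digits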

    I have no doubt that other algorithms can be faster or more accurate with fewer digits.

    This whole thread feels somewhat spooky to me; financial transactions follow their own rules and rarely involve exponents bigger than 100.

    Natural processes in science have no reason to be decimal. If a measurement is expressed as a decimal, this already involves an error margin (bacteria don't grow by exactly x%).

    I.e. decimals are normally human-made!

    So if accuracy is a problem, it can't be fully solved with a general-purpose library.

    E.g. IIRC, some fiscal authorities or banks calculate with special rounding units.

    That's not solved with 128 digits of accuracy...

    Cheers Rolf

    PS: Je suis Charlie!

      I don't really know much about MPFR

      It's good for reference - if you want to be assured of correctness in assignment, rounding and calculation. I've just now used it to demonstrate to myself that one cannot guarantee that either the "left-to-right" or "right-to-left" exponentiation calculations are going to be as (or more) accurate than performing the requisite number of multiplications.

      The example I'll give is the calculation of 5.35 ** 107, the correct value of which (to 15 decimal digits) is 8.58726126272004e77. Using double precision, the right-to-left method yields 8.58726126271995e77, the left-to-right method yields 8.58726126271996e77, and cumulatively multiplying 5.35 by itself 106 times yields 8.58726126271998e77, which is closest to the actual value.
      (Both perl and mpfr are returning the same results.)

      I.e. decimals are normally human-made!

      But it's those human-made numbers that we're normally interested in. When I multiply A by B (on a computer), I'm generally not really interested in finding the product of the base 2 approximations of A and B.
      I usually just want to know what A times B is.
      And it would surely seem strange to aliens that the human-made machines we use to process our human-made decimal numbers don't, in fact, process *those* numbers.
      Doesn't seem strange to us of course - we're all well and truly accustomed to that just being a fact of life ;-)
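
      A quick illustration of that mismatch (assuming an NV of "double", i.e. IEEE 754 binary64; the printed digits are what that format actually stores):

      use strict;
      use warnings;

      # The decimal 0.1 we typed is not the number the machine works with:
      printf "%.17g\n", 0.1;         # 0.10000000000000001
      printf "%.17g\n", 0.1 + 0.2;   # 0.30000000000000004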

      Cheers,
      Rob
        It's perfectly possible that a bad method sometimes returns a better result within the error margin.

        Only if you can show this behaviour for a high percentage of random numbers would it suggest that the error approximation might be wrong.

        Even then I wouldn't care much, because I needed a guaranteed result in affordable time for a pathological use case, and NOT an optimal method. :)

        The world of numerical algorithms is complex, and probably ~40 multiplication operations are already too slow to be accepted.

        Chip designers prefer series expansions which can be easily wired as bit operations.

        Cheers Rolf

        PS: Je suis Charlie!