Even then I wouldn't care much ...
Yeah, I'm not greatly fussed either. But it's something I haven't really thought much about; it's a bit interesting, and I haven't been looking at it quite right.
I've only just now got my head around how to determine what the correct result for a**b (for integer b) is when rounding gets involved, and I don't mind elaborating if asked.
Otherwise I'll shut up and leave it for a more appropriate forum.
Cheers, Rob
Consider yourself asked :)
Heh ... you might be expecting something more erudite than what I can deliver :-)
I was getting bogged down in examining different approaches that were giving different results, and scratching my head over what was the right thing to do. And then it occurred to me that all I had to do was resort to some pretty basic clear thinking.
So there's nothing too profound here, and I'm really just admitting to a haziness that was probably afflicting no-one but myself.
Sticking to base-2 arithmetic for the moment: take a double-precision value such as 5.34, which in binary is
1.0101010111000010100011110101110000101000111101011100e2
Raised to an integer power of (say) 100, there can be only one correct result - namely, the result you get when you multiply the original value by itself 99 times such that no rounding is performed (ie to 5300-bit precision), and then round that 5300-bit result back to 53 bits.
If you allow other results, then you admit to there being more than one correct answer - which would be to ask more of arithmetic than it is generally prepared to give.
The good "pow" algorithms won't follow that exact procedure of course, but that final 53-bit value is the value that they ought to return - and the value that does get returned by both perl (mostly) and mpfr for the various cases I've tested. So ... for 5.34 ** 100, the result (as calculated by both mpfr_pow and the multiplication chain) came out at
1.1001101101000111011101111001101101001011101110000100e241 which equates to the 15 digit value of
5.67708900158395e72
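Just as an aside, this exact-product-then-round definition is easy to check outside perl. Here's a small Python sketch of my own (not what mpfr or perl actually do): `fractions.Fraction` holds the double nearest 5.34 exactly as a binary rational, so an integer power of it is the unrounded multiplication chain, and converting back to float performs the single final rounding.

```python
from fractions import Fraction

# A double such as 5.34 is really an exact binary rational, and
# Fraction(5.34) captures that rational exactly.  Raising it to an
# integer power therefore performs the whole multiplication chain
# with no rounding at all; float() then rounds the exact result
# back to 53 bits in one final, correctly-rounded step.
x = 5.34
exact = Fraction(x) ** 100      # no intermediate rounding anywhere
correct = float(exact)          # the one "right" double for 5.34 ** 100

print(correct)
print(x ** 100)                 # libm's pow, for comparison
```

If the platform's pow is correctly rounded for this input, the two printed values agree; where they differ, the first one is the reference value.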
And on my perl-5.20.0 (NV is double), perl came up with the identical value for 5.34 ** 100.
(However, in 35 of the 3000 test cases I ran, perl was out by 1 ULP - but that's not bad for perl.)
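For measuring that sort of 1-ULP discrepancy, a small helper is handy. Here's one possible sketch in Python (the usual bit-reinterpretation trick, not anything from my actual test harness): map each double to its ordinal position in the sequence of representable doubles and subtract.

```python
import struct

def ulp_diff(a, b):
    """How many representable doubles apart are a and b?
    0 means identical; 1 means adjacent doubles (a 1-ULP error)."""
    def ordinal(x):
        # Reinterpret the double's bits as a signed 64-bit integer ...
        i = struct.unpack('<q', struct.pack('<d', x))[0]
        # ... and remap negative floats so the ordering is monotonic
        # (this also makes -0.0 and +0.0 both map to 0):
        return i if i >= 0 else -(1 << 63) - i
    return abs(ordinal(a) - ordinal(b))

print(ulp_diff(1.0, 1.0))             # 0
print(ulp_diff(1.0, 1.0 + 2 ** -52))  # 1: the next double up from 1.0
```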
For decimal arithmetic, it's essentially the same scenario - the value returned for 1.00001 ** 6e7 should be the same as the one yielded by multiplying the decimal 1.00001 by itself 59999999 times (without any rounding being done) and then rounding that exact result to the appropriate precision.
Luckily, in practice you can usually do *some* rounding without jeopardising the accuracy of the final rounded-down result, and I'm sure that the good "pow" algorithms make full use of that potential - as did LanX's Math::BigFloat approach.
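To illustrate that last point in decimal, here's a rough Python sketch of my own (using the stdlib `decimal` module, not LanX's Math::BigFloat code): binary exponentiation carried out with a handful of guard digits beyond the target precision, with every intermediate product rounded to that working precision, and one final rounding at the end.

```python
from decimal import Decimal, localcontext

def pow_guarded(x, n, target_prec, guard=10):
    """Compute x**n for a Decimal x and integer n >= 1, rounding each
    intermediate product to target_prec + guard digits rather than
    keeping it exact, then rounding once to target_prec at the end."""
    with localcontext() as ctx:
        ctx.prec = target_prec + guard
        result = Decimal(1)
        base = +x                  # unary + rounds into working precision
        while n > 0:
            if n & 1:
                result *= base     # rounded to working precision, not exact
            base *= base
            n >>= 1
    with localcontext() as ctx:
        ctx.prec = target_prec
        return +result             # the single final rounding

# 1.00001 ** 6e7 to 20 significant digits, via ~26 squarings
# instead of 59999999 full-precision multiplications:
print(pow_guarded(Decimal('1.00001'), 60_000_000, 20))
```

With a sensible guard-digit count this almost always matches the round-the-exact-product answer; it can go wrong only when the exact value lies extremely close to a rounding boundary, which is why the careful "pow" implementations detect such cases and retry at higher precision.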
Cheers, Rob
> Otherwise I'll shut up and leave it for a more appropriate forum.
The point is ... I only roughly remember my CS lessons on error propagation, and I'm not very keen to open the old books again. ... ;)
If I had the time I would try to reproduce your results with Math::BigFloat, because I don't have much trust in those C libraries.
Please keep in mind that a method can have a better error margin (ie a better worst-case scenario) while an alternative approach is statistically far better (in the typical case).
It's even more complicated if you start discussing which set of input calculations should be considered representative for such a statistical investigation.
for instance x**6e7 ... seriously? :P