It's perfectly possible that a bad method sometimes returns a better result within the error margin.
Only if you could show this behavior for a high percentage of random inputs would it indicate that the error estimate might be wrong.
Even then I wouldn't care much, because I needed a guaranteed result in affordable time for a pathological use case, and NOT an optimal method. :)
The world of numerical algorithms is complex, and ~40 multiplications is probably already too slow to be acceptable.
Chip designers prefer series expansions, which can easily be wired up as bit operations.
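Just to put a rough number on the cost, here's a toy Perl sketch (no claim that real hardware does it this way): it evaluates a truncated Taylor series for exp(x) with Horner's rule and counts the multiplications; degree 10 is an arbitrary choice.

use strict;
use warnings;

# Toy illustration only: truncated Taylor series for exp(x), evaluated
# with Horner's rule, counting the multiplications along the way.
my @coeff;                            # 1/k! for k = 10 down to 0
for my $k (reverse 0 .. 10) {
    my $f = 1;
    $f *= $_ for 1 .. $k;
    push @coeff, 1 / $f;
}

my $x     = 0.5;
my $acc   = 0;
my $mults = 0;
for my $c (@coeff) {
    $acc = $acc * $x + $c;            # one multiplication per Horner step
    $mults++;
}

printf "series: %.15g  builtin exp: %.15g  (%d mults)\n",
    $acc, exp($x), $mults;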
> Even then I wouldn't care much ...
Yeah, I'm not greatly fussed either. But it's something I haven't really thought much about; it's a bit interesting, and I haven't been looking at it quite right.
I've only just now got my head around how to determine what the correct result for a**b (for integer b) is when rounding gets involved, and I don't mind elaborating if asked.
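Just to give a taste of what I mean, here's a minimal sketch using Math::BigFloat at 60 digits as the high-precision reference. The base, exponent and working precision are arbitrary choices, and it assumes perl's string-to-number conversion rounds correctly.

use strict;
use warnings;
use Math::BigFloat;

# Minimal sketch, not a proof: compute a**b (integer b) with Math::BigFloat
# at 60 significant digits, then let perl's string->NV conversion squeeze
# that back into a double.  That double is what I'd call the "correct"
# (correctly rounded) result.  The base is chosen so its decimal form is
# exact as a double, 60 digits is an arbitrary but generous working
# precision, and I'm assuming the string->NV conversion itself rounds
# correctly (true for any modern strtod).

Math::BigFloat->accuracy(60);

my $a = 1.0009765625;        # 1 + 2**-10, exactly representable as a double
my $b = 10_000;

my $builtin = $a ** $b;                               # whatever the C library gives
my $ref     = Math::BigFloat->new('1.0009765625') ** $b;
my $correct = 0 + $ref->bstr;                         # 60 digits rounded to nearest double

printf "builtin **       : %.17g\n", $builtin;
printf "correctly rounded: %.17g\n", $correct;

if ($builtin == $correct) {
    print "builtin ** is correctly rounded for this case\n";
}
else {
    print "builtin ** is off by at least one ulp for this case\n";
}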
Otherwise I'll shut up and leave it for a more appropriate forum.
Cheers, Rob
> Otherwise I'll shut up and leave it for a more appropriate forum.
Point is ... I only roughly remember my CS lessons on error propagation, and I'm not very keen to open the old books again ... ;)
If I had the time I would try to reproduce your results with Math::BigFloat, because I don't have much trust in those C libraries.
Please keep in mind that a method can have a better error margin (i.e. worst case) while an alternative approach is statistically far better (on average).
It gets even more complicated once you start discussing which set of inputs should be considered representative for such a statistical investigation.
for instance x**6e7 ... seriously? :P
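Something like this is roughly the cross-check I have in mind. It's only a sketch: the sample size, input ranges and 60-digit precision are arbitrary, and the bases are restricted to exact short decimals so Math::BigFloat sees exactly the same number the FPU does.

use strict;
use warnings;
use Math::BigFloat;

# Sketch of the cross-check: throw random (base, integer exponent) pairs at
# perl's builtin ** and count how often it disagrees with a 60-digit
# Math::BigFloat reference rounded back to a double.  The bases are random
# multiples of 2**-10, so they are exact both as doubles and as short
# decimal strings (i.e. Math::BigFloat and the FPU see the same number).
# Sample size, ranges and precision are arbitrary choices, which is exactly
# the "what input set is representative?" question.

Math::BigFloat->accuracy(60);

my $trials   = 1_000;
my $mismatch = 0;

for (1 .. $trials) {
    my $a = (512 + int rand 1024) / 1024;   # exact base in [0.5, 1.5)
    my $b = 1 + int rand 50;                # small positive integer exponent

    my $builtin = $a ** $b;                 # whatever the C library delivers

    my $ref     = Math::BigFloat->new("$a") ** $b;   # "$a" is exact here
    my $correct = 0 + $ref->bstr;           # 60 digits rounded to nearest double

    $mismatch++ if $builtin != $correct;
}

printf "%d of %d random cases differed from the correctly rounded result\n",
    $mismatch, $trials;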