Per the documentation, machine_epsilon is the maximum relative error ("Machine::Epsilon - The maximum relative error while rounding a floating point number"), but you are using it as an absolute error, which means you are not scaling the error to the magnitude of the value. The possible error is different for 1_000_000 vs 1 vs 0.000_001... it's even different for 2 vs 1 vs 1/2. So that won't help you "correct" the cumulative floating point errors. Since you are using values from 0.8 down to 0.1, you will be in the ranges [0.5,1.0), [0.25,0.50), [0.125,0.25), and [0.0625,0.125), which have four different ULP (unit in the last place) sizes.
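To see why a relative epsilon has to be scaled before it can be used as an absolute tolerance, here's a minimal sketch (it assumes the Machine::Epsilon module from your code and an IEEE-754 double NV; the loop values are just one sample point from each of those four ranges). It prints the approximate ULP in each range next to the fixed 2*epsilon you are adding:

    use strict;
    use warnings;
    use POSIX qw(floor);
    use Machine::Epsilon;   # same module as in the original code

    # machine_epsilon() is a *relative* bound: the spacing of doubles near 1.0
    # (about 2.22e-16 for a 64-bit double).
    my $eps = machine_epsilon();

    for my $x (0.8, 0.4, 0.2, 0.1) {
        # power-of-two interval (binade) that $x falls in, e.g. 0.8 -> [0.5, 1.0)
        my $binade = 2 ** floor( log($x) / log(2) );

        printf "x = %-4g  binade starts at %-7g  ULP ~ %.3e   fixed 2*eps = %.3e\n",
            $x, $binade, $eps * $binade, 2 * $eps;
    }

The ULP (the absolute size of one rounding step) halves every time you drop into the next binade, while your fixed 2*epsilon stays the same size.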
Besides that, the cumulative errors will keep getting bigger, until adding 2*epsilon to x is no longer enough to push the result back to the intended value. Starting at 0.8 and decreasing by 0.01 each loop, by 70 iterations the stringification of the "real" number and the stringification of your x+2*epsilon will no longer match.
The example in the spoiler shows both of those issues in more detail (where "diff" represents the real number, start - n*step; "x" represents the result of the repeated individual subtractions; and "sectokia" represents x + 2*epsilon).
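For reference, here is a rough, hypothetical re-creation of that comparison (not the original spoiler code; it just follows the labels described above, with the 0.8 start and 0.01 step from this thread and Machine::Epsilon for the adjustment):

    use strict;
    use warnings;
    use Machine::Epsilon;

    my $eps   = machine_epsilon();
    my $start = 0.8;
    my $step  = 0.01;
    my $x     = $start;

    for my $n (1 .. 70) {
        $x -= $step;                        # repeated subtraction: errors accumulate
        my $diff     = $start - $n * $step; # the "real" number, computed directly
        my $sectokia = $x + 2 * $eps;       # the proposed fix-up

        printf "n=%2d  diff=%s  x=%s  sectokia=%s%s\n",
            $n, $diff, $x, $sectokia,
            ( "$sectokia" eq "$diff" ? "" : "   <-- stringifications differ" );
    }

The flagged lines are where the 2*epsilon nudge is no longer large enough to bring the stringification back in line with the directly computed value.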
Further, your x+2*epsilon assumes that the stored double for your step size is slightly bigger than the decimal value you wrote; that is true for a step of 0.1 (1.00000000000000006e-01) or 0.01 (1.00000000000000002e-02), but for a step of 0.03 (2.99999999999999989e-02) the stored value is slightly smaller, so your adjustment pushes the string in the wrong direction. (Not shown in the spoiler code.)
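You can check which side each step size falls on yourself (assuming an IEEE-754 double NV) by printing the step sizes with enough digits:

    use strict;
    use warnings;

    # Print each step size with enough digits to see the stored double's true value.
    printf "%-5s -> %.17e\n", $_, $_ for 0.1, 0.01, 0.03;

    # Typical output on a 64-bit-double build:
    #   0.1   -> 1.00000000000000006e-01   (stored value slightly above 0.1)
    #   0.01  -> 1.00000000000000002e-02   (stored value slightly above 0.01)
    #   0.03  -> 2.99999999999999989e-02   (stored value slightly below 0.03)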