in reply to what did I just see..?
No one seems to have actually told you how to fix this. Basically, if you are subtracting two floating point numbers (which you are) and then rounding (which is what print does - downward), then the upper bound of the error in the result will be twice the machine's epsilon. So to fix this your code needs to be:
use Machine::Epsilon;
for (my $x = 0.8; $x > 0.1; $x -= 0.01) {
    print "" . ($x + 2 * machine_epsilon()) . "\n";
}
Re^2: what did I just see..?
by pryrt (Abbot) on Mar 22, 2021 at 17:37 UTC
Per the documentation, machine_epsilon is the maximum relative error ("Machine::Epsilon - The maximum relative error while rounding a floating point number"), but you are using it in an absolute fashion, which means that you are not properly scaling the error. The error possible is different for 1_000_000 vs 1 vs 0.000_001... it's even different for 2 vs 1 vs 1/2. So that won't help you "correct" the cumulative floating point errors.

Since you are using values from 0.8 down to 0.1, you will be in the ranges [0.5,1.0), [0.25,0.50), [0.125,0.25), and [0.0625,0.125), which actually have four different ULP sizes.

Besides that, the cumulative errors will keep getting bigger, until your x+2*epsilon is no longer enough to increase it. Starting at 0.8 and decreasing by 0.01 each loop, by 70 iterations the stringification of the "real" number and the stringification of your x+2*epsilon will no longer match each other. The example in the spoiler shows both of those issues in more detail (where "diff" represents the real number, start - n*step, "x" represents the multiple individual subtractions, and "sectokia" represents the x+2*epsilon).
edit: add my output from above code /edit
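A minimal sketch (not from the original post) of the "four different ULP sizes" point: reinterpret a double's bits as an integer, step to the next representable double, and measure the gap. It assumes a perl built with 64-bit IEEE-754 double NVs and pack/unpack support for the "Q" (64-bit unsigned) template.

```perl
use strict;
use warnings;

# Absolute spacing between adjacent doubles (the ULP) at $x.
sub ulp {
    my ($x) = @_;
    my $bits = unpack("Q", pack("d", $x));          # raw bit pattern
    my $next = unpack("d", pack("Q", $bits + 1));   # next representable double
    return $next - $x;
}

# One value from each binade the 0.8 -> 0.1 loop passes through:
# the spacing halves at each power-of-two boundary.
printf "ulp(%s) = %.17e\n", $_, ulp($_) for 0.8, 0.4, 0.2, 0.1;
```

Since the spacing halves at every power of two, a single absolute fudge factor cannot be right across the whole loop.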
Further, your x+2*epsilon assumes that your subtraction step size is slightly bigger than the real value; that is true for a step of 0.1 (1.00000000000000006e-01) or 0.01 (1.00000000000000002e-02), but for a step of 0.03 (2.99999999999999989e-02) the step size is smaller, so now your adjustment takes the string in the wrong direction. (Not shown in the spoiler code.)
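The quoted representations above can be checked directly; a small sketch (assuming IEEE-754 double NVs and a C-library "%.17e" that prints a two-digit exponent):

```perl
use strict;
use warnings;

# 0.1 and 0.01 round *up* to their nearest double, while 0.03 rounds
# *down* - so a "+ 2*epsilon" nudge pushes a 0.03-stepped value
# further from the ideal value, not closer to it.
printf "%-4s stored as %.17e\n", $_, $_ for 0.1, 0.01, 0.03;
```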
by sectokia (Friar) on Mar 24, 2021 at 10:17 UTC
For the range 0 to 1, epsilon will always be greater than the error (since it would only scale smaller), but of course you are correct that it should be scaled both up and down for a normalized solution.
You are also right about needing to apply it to each subtraction operation. I don't agree with the bit about it being in the wrong direction if the step happens to be just under the desired ideal value. Print rounds down. If the float is +/- epsilon from ideal, then adding an epsilon brings it into the range of 0 to +2 epsilon from ideal, which will round down to ideal. It doesn't matter if you started +ve or -ve from ideal.
by pryrt (Abbot) on Mar 24, 2021 at 17:42 UTC
"For the range 0 to 1, then epsilon will always be greater than the error"

So while we have the example of 0.8 down to 0.1, the difference between the epsilon and the ULP won't be huge. But who knows whether the OP will eventually go beyond that range, and get upset when numbers near 2e-16 start displaying as near 4e-16. That's one of the reasons I was cautioning against applying epsilon in this case: it might later be generalized to a condition where it's not even close to the real absolute error.
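A sketch (hypothetical values, not from the thread) of what happens when the value itself shrinks to the same order of magnitude as the fixed absolute nudge:

```perl
use strict;
use warnings;

my $eps = 2 ** -52;    # machine epsilon for an IEEE-754 double
my $x   = 2e-16;       # a value of the same magnitude as epsilon

# The "correction" is now bigger than the value being corrected,
# so the printed result is visibly wrong.
print "raw:    $x\n";
print "nudged: ", $x + 2 * $eps, "\n";    # roughly 6.4e-16, not 2e-16
```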
"Print rounds down"

That's not true, as syphilis showed. Here's another example.
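The example linked from the original post is not preserved in this copy; a minimal sketch of the kind of counterexample meant here (assuming IEEE-754 double NVs and perl's default %.15g stringification): the nearest double to 0.7 sits *below* 0.7, yet print shows "0.7", i.e. it rounded up, not down.

```perl
use strict;
use warnings;

my $x = 0.7;             # nearest double is 0.69999999999999996, below 0.7
printf "%.17g\n", $x;    # shows the stored value
print "$x\n";            # prints "0.7": rounded up, not truncated down
```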
Print rounds to nearest to the precision that print "likes" rounding to.
"If the float is +/- epsilon from ideal, then adding an epsilon brings it into range of 0 to +2 epsilon from ideal, which will round down to ideal."
by syphilis (Archbishop) on Mar 24, 2021 at 12:47 UTC
As a generalization this is not true. Even when it is true for some value $x, it will be untrue for -$x. However, there are times when print() rounds up for positive values. Consider the following (perl-5.32.0, nvtype is "double"):

And how do I calculate the value of this "epsilon" that is being mentioned?

Cheers, Rob
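The code that followed "Consider the following" did not survive this copy; a hedged sketch of the kind of case being described (a positive value whose stored double lies below its literal, yet which print rounds up) might look like:

```perl
use strict;
use warnings;

my $x = 0.3;             # stored as 0.29999999999999999, below 0.3
printf "%.17g\n", $x;    # the stored value
print "$x\n";            # prints "0.3" - rounded up, not down
```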
by pryrt (Abbot) on Mar 24, 2021 at 17:29 UTC
by syphilis (Archbishop) on Mar 25, 2021 at 01:01 UTC
Re^2: what did I just see..?
by LanX (Saint) on Mar 22, 2021 at 02:12 UTC
I did: I said calculate in cents if you want that precision in the end. Even when using one single division to a float in the final output, it'll print correctly.
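A minimal sketch of that integer-cents approach (loop bounds chosen to mirror the OP's 0.8-to-0.1 range): the counter stays an exact integer, and each printed value involves only a single division and a single rounding step.

```perl
use strict;
use warnings;

# Count in whole cents; divide by 100 only at output time.
for (my $cents = 80; $cents > 10; $cents -= 1) {
    print $cents / 100, "\n";    # 0.8, 0.79, 0.78, ..., 0.11
}
```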
Cheers Rolf
by Anonymous Monk on Mar 22, 2021 at 08:48 UTC
by LanX (Saint) on Mar 22, 2021 at 11:07 UTC
Show me!
Cheers Rolf