Re^2: what did I just see..?
by pryrt (Abbot) on Mar 22, 2021 at 17:37 UTC
Sorry, sectokia, but I don't think that adding twice the machine_epsilon is the cure-all you really think it is.
Per the documentation, machine_epsilon is the maximum relative error ("Machine::Epsilon - The maximum relative error while rounding a floating point number"), but you are using it in an absolute fashion, which means you are not scaling the error properly. The possible error is different for 1_000_000 vs 1 vs 0.000_001... it's even different for 2 vs 1 vs 1/2. So that won't help you "correct" the cumulative floating point errors. Since you are using values from 0.8 down to 0.1, you will be in the ranges [0.5,1.0), [0.25,0.50), [0.125,0.25), and [0.0625,0.125), which actually have four different ULP sizes.
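A quick way to see those four ULP sizes (a minimal sketch, assuming a perl new enough to have POSIX::nextafter, i.e. 5.22 or later):
use strict;
use warnings;
use POSIX qw(nextafter);      # nextafter() needs perl >= 5.22
use Machine::Epsilon;

# The gap between adjacent doubles (one ULP) changes with each power-of-two
# range, so a fixed absolute fudge like 2*machine_epsilon cannot track it.
for my $x (0.0625, 0.125, 0.25, 0.5) {
    my $ulp = nextafter($x, 1) - $x;   # distance to the next double above $x
    printf "x = %-6g  ULP = %.3e  2*machine_epsilon = %.3e\n",
        $x, $ulp, 2 * machine_epsilon();
}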
Besides that, the cumulative errors will keep getting bigger, until your x+2*epsilon is no longer enough to bridge the gap. Starting at 0.8 and decreasing by 0.01 each loop, by 70 iterations the stringification of the "real" number and the stringification of your x+2*epsilon will no longer match each other.
The example in the spoiler shows both of those issues in more detail (where "diff" represents the real number, start - n*step, "x" represents the multiple individual subtractions, and "sectokia" represents the x+2epsilon).
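For reference, a rough stand-in for that spoiler (not the original spoiler code) might look like this:
use strict;
use warnings;
use Machine::Epsilon;

my ($start, $step) = (0.8, 0.01);
my $x = $start;                          # the running, repeatedly-subtracted value
for my $n (1 .. 70) {
    $x -= $step;
    my $diff     = $start - $n * $step;  # the "real" number: one single subtraction
    my $sectokia = $x + 2 * machine_epsilon();
    printf "n=%2d  diff=%-20s x=%-20s sectokia=%s\n", $n, $diff, $x, $sectokia;
}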
Further, your x+2*epsilon assumes that your subtraction step size is slightly bigger than the real value; that is true for a step of 0.1 (1.00000000000000006e-01) or 0.01 (1.00000000000000002e-02), but for a step of 0.03 (2.99999999999999989e-02), the step size is smaller, so now your adjustment takes the string in the wrong direction. (Not shown in the spoiler code.)
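A one-liner makes the lean of each step size visible (these are the same %.17e values quoted above):
perl -le "printf qq(%s => %.17e\n), $_, $_ for 0.1, 0.01, 0.03"
which should print something like
0.1 => 1.00000000000000006e-01
0.01 => 1.00000000000000002e-02
0.03 => 2.99999999999999989e-02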
For the range 0 to 1, epsilon will always be greater than the error (as it would only scale smaller), but of course you are correct that it should be scaled both up and down for a normalized solution.
You are also right about needing to apply it to each subtraction operation.
I don't agree with the bit about it being in the wrong direction if the step happens to be just under the desired ideal value. Print rounds down. If the float is +/- epsilon from ideal, then adding an epsilon brings it into the range of 0 to +2 epsilon from ideal, which will round down to ideal. It doesn't matter if you started +ve or -ve from ideal.
> For the range 0 to 1, epsilon will always be greater than the error
So while we have the example of 0.8 down to 0.1, the difference between the epsilon and the ULP won't be huge. But who knows whether the OP will eventually go beyond that range, and get upset when numbers near 2e-16 start displaying as near 4e-16. That's one of the reasons I was cautioning against applying epsilon in this case, because it might later be generalized to a condition where it's not even close to the real absolute error.
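For instance (a small sketch, not taken from the thread), once the values themselves are down near machine_epsilon, the fudge dwarfs them:
use strict;
use warnings;
use Machine::Epsilon;

# At this scale 2*machine_epsilon is larger than the value itself, so the
# "corrected" number that gets printed is a gross overestimate.
my $x = 2e-16;
printf "x         = %s\n", $x;
printf "x + 2*eps = %s\n", $x + 2 * machine_epsilon();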
> Print rounds down
That's not true, as syphilis showed. Here's another example.
C:\usr\local\share>perl -le "printf qq(%.17e => %s\n), $_, $_ for 2.99999999999999989e-02, 2.99999999999999503e-02, 2.99999999999999468e-02"
2.99999999999999989e-02 => 0.03
2.99999999999999503e-02 => 0.03
2.99999999999999468e-02 => 0.0299999999999999
Print rounds to nearest, at whatever precision print "likes" to round to.
> If the float is +/- epsilon from ideal, then adding an epsilon brings it into the range of 0 to +2 epsilon from ideal, which will round down to ideal.
Again, no.
C:\usr\local\share>perl -MMachine::Epsilon -le "for (2.99999999999999989e-02, 2.99999999999999503e-02, 2.99999999999999468e-02) { printf qq(%.17e => %-30s x+2eps = %s\n), $_, $_, $_+2*machine_epsilon}"
2.99999999999999989e-02 => 0.03                           x+2eps = 0.0300000000000004
2.99999999999999503e-02 => 0.03                           x+2eps = 0.0300000000000004
2.99999999999999468e-02 => 0.0299999999999999             x+2eps = 0.0300000000000004
Notice that x+2epsilon prints a number bigger than 0.03, whereas the value of x is slightly less than 0.03 (by 1 ULP).
> Print rounds down
As a generalization this is not true.
Even when it is true for some value $x, it will be untrue for -$x.
However, there are times when print() rounds up for positive values.
Consider the following (perl-5.32.0, nvtype is "double"):
use strict;
use warnings;
my $d = 0.4444444444444446;
printf "%a\n", $d;      # the exact stored double, in hex float notation
print "$d\n";           # the default stringification
print "print() has rounded upwards\n"
    if "$d" > $d;       # the stringified value is numerically larger than $d
__END__
Outputs:
0x1.c71c71c71c71fp-2
0.444444444444445
print() has rounded upwards
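Negating the same value flips the direction of the rounding; a quick check (not in the original post) under the same assumptions:
use strict;
use warnings;
my $d = -0.4444444444444446;
print "$d\n";
print "print() has rounded downwards\n"
    if "$d" < $d;       # the stringified value is numerically smaller than $d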
And how do I calculate the value of this "epsilon" that is being mentioned?
Cheers, Rob
Re^2: what did I just see..?
by LanX (Saint) on Mar 22, 2021 at 02:12 UTC
> No one seems to have actually told you how to fix this.
I did: I said to calculate in cents if you want that precision in the end.
Even when a single division back to a float is used for the final output, it'll print correctly.
DB<3> for ($x=80; $x > 10; $x -= 1) { say $x/100; }
0.8
0.79
0.78
0.77
0.76
0.75
0.74
0.73
0.72
0.71
0.7
... yadda
0.21
0.2
0.19
0.18
0.17
0.16
0.15
0.14
0.13
0.12
0.11
DB<4>
That's not a solution, it's sticking your head in the sand.
You'll end up coming unstuck at certain values. If you really are going to use cents, use them, and do integer division.
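One reading of that (a sketch, not necessarily what was meant here): keep the running amount in integer cents and only split it apart with integer division and modulo when formatting:
use strict;
use warnings;
use feature 'say';

# int() and % split dollars from cents, so the printed digits never depend
# on how a float happens to round.
for (my $cents = 80; $cents > 10; $cents--) {
    say sprintf '%d.%02d', int($cents / 100), $cents % 100;
}
That prints 0.80 down to 0.11, with the digits coming straight from integer arithmetic.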
> You'll end up coming unstuck at certain values
Show me!