in reply to Re: Determining the minimum representable increment/decrement possible?
in thread Determining the minimum representable increment/decrement possible?
Indeed. The previous post was done using 5.10.1; this is 5.22:
C:\Program Files>\Perl22\bin\perl.exe \perl22\bin\p1.pl
printf "% 25.17g\n", -8.2727285363069939e-293;;
 -8.2727285363069883e-293 ## manually realigned to highlight the difference.
[0]{} Perl>
Of more importance is the source of the numbers, which apparently cannot be represented in 64-bit floating point, at least not by perl.
They are produced and output by the C++ code I'm optimising. I originally thought that the C++ math was producing denormals and that C++ was outputting them unnormalised, but that does not seem to be the case:
print join ' ', unpack 'a1 a11 a52', scalar reverse unpack 'b64', pack 'd', $_
    for -8.2727285363069939e-293, -8.2727285363069883e-293;;
1 00000110100 1010011010101110110010001111100101000001001101001100
1 00000110100 1010011010101110110010001111100101000001001101000111
Even the latest perl seems to be getting the conversion wrong!? Both numbers are representable as 64-bit IEEE 754 doubles, so why Perl should silently convert one to the other is something of a mystery. Basically, Perl's (or the underlying CRT's) input routine just seems to be broken.
Update: the 5.22 perl I'm using is built with the same compiler and runtime libraries as the C++ code, so this looks like a PerlIO issue rather than a CRT issue.