Basically, to determine the smallest delta for a given floating-point value, decompose that value's IEEE representation into sign, fraction, and exponent. My perl's NV is double-precision (53-bit precision): from MSB to LSB, a sign bit, 11 exponent bits, and 52 fraction bits (plus an implied leading 1), giving sign * (1 + fract_bits/2^52) * 2**exp. For such a double-precision NV, the smallest change is 2**(exp - 52). So you just need to know the exp of the current representation (not forgetting to subtract the bias of 1023 from the 11-bit exponent field). If your NV is more precise (lucky you), adjust accordingly.
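Here's a minimal sketch of that idea (my own illustration, not a polished module): pull the biased exponent out of the bit pattern and compute 2**(exp - 52). It assumes an 8-byte double NV and uses the > pack modifier plus 'Q', so it needs a perl newer than 5.6 with 64-bit integer support.

<code>
use strict;
use warnings;

# Smallest representable increment (ulp) of a double-precision NV,
# taken from the 11-bit biased exponent field of the IEEE 754 image.
sub ulp_of {
    my ($nv) = @_;
    my $bits   = unpack 'Q>', pack 'd>', $nv;   # raw 64-bit big-endian image
    my $rawexp = ($bits >> 52) & 0x7FF;         # 11-bit biased exponent
    my $exp    = $rawexp ? $rawexp - 1023       # normal: subtract the bias
                         : -1022;               # denormal: fixed exponent
    return 2 ** ($exp - 52);                    # (Inf/NaN not handled)
}

printf "ulp(1.0)   = %.20g\n", ulp_of(1.0);     # 2**-52, about 2.22e-16
printf "ulp(2.0)   = %.20g\n", ulp_of(2.0);     # 2**-51
printf "ulp(1e300) = %.20g\n", ulp_of(1e300);
</code>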
What I'm unsure about is this: if you take a denormal number and add a small fraction, does it re-normalize before doing the addition, or is the result limited by the existing denormal exponent?
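A quick probe of that question (just a sketch, not a definitive answer): the spacing between adjacent denormals is fixed at 2**-1074, so adding the smallest denormal to a denormal should register, while adding it to a normal number (whose ulp is much larger) should round away.

<code>
use strict;
use warnings;

my $tiny   = 2 ** -1074;    # smallest positive denormal
my $normal = 2 ** -1000;    # normal; its ulp is 2**-1052, so $tiny rounds away
my $denorm = 2 ** -1060;    # denormal; spacing here is exactly 2**-1074

printf "normal + tiny changes? %s\n", ($normal + $tiny != $normal) ? "yes" : "no";
printf "denorm + tiny changes? %s\n", ($denorm + $tiny != $denorm) ? "yes" : "no";
</code>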
The last time floating point came up (in Integers sometimes turn into Reals after substraction), I started work on a module that will expand an IEEE 754 double-precision float into sign*(1+fract)*2^exp, based on the internal representation. Unfortunately, that module isn't ready for prime time, but I'll still link you to an Expand.pm development copy, along with debug.pl (which will eventually become my .t file(s), but for now shows how I currently use my functions). This may or may not help you delve deeper into the problem. Right now it's focused on 53-bit precision, and hampered by the fact that I want it to work on a machine at $work that is limited to perl 5.6, so I cannot use the > modifier for pack/unpack, but the same ideas should work for you...
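For reference, here's a rough sketch of that kind of expansion (my own illustration, not the linked Expand.pm): it avoids the > pack modifier by byte-swapping the packed double when needed, so the only assumptions are an 8-byte double NV and a $Config{byteorder} that identifies the machine's endianness.

<code>
use strict;
use warnings;
use Config;

# Force big-endian byte order by hand, then split the double with 'N2'.
my $big_endian = $Config{byteorder} =~ /^(?:4321|87654321)/;

sub expand {
    my ($nv) = @_;
    my $raw = pack 'd', $nv;
    $raw = reverse $raw unless $big_endian;     # byte-swap on little-endian
    my ($hi, $lo) = unpack 'N2', $raw;          # two big-endian 32-bit halves
    my $sign   = ($hi >> 31) ? -1 : 1;
    my $rawexp = ($hi >> 20) & 0x7FF;           # 11-bit biased exponent
    # 52-bit fraction: low 20 bits of $hi, then all 32 bits of $lo
    my $fract  = (($hi & 0xFFFFF) * 2**32 + $lo) / 2**52;
    return $rawexp ? ($sign, 1 + $fract, $rawexp - 1023)   # normal
                   : ($sign, $fract, -1022);                # denormal
}

my ($s, $m, $e) = expand(-6.25);
printf "%d * %.17g * 2**%d\n", $s, $m, $e;      # -1 * 1.5625 * 2**2
</code>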
update: changed URLs to a more "permanent" location