in reply to Re^4: Introspection into floats/NV
in thread Introspection into floats/NV

Update: Caveat Lector: The table below ("Perl `pack` Template Comparison: "d" vs "F"") was generated by ChatGPT and is wrong. For an explanation, see Re^10: Introspection into floats/NV, Re^11: Introspection into floats/NV and Re^12: Introspection into floats/NV.

"How can the AI know what hardware I'm using?"

Magic or voodoo? I don’t know.

"…have more insights in the difference of "d" and "F" templates."

Probably this isn’t new to you:

Perl `pack` Template Comparison: "d" vs "F"
===========================================

Summary Table
-------------

| Template | Type   | Precision    | Bits | Description                     |
|----------|--------|--------------|------|---------------------------------|
| "d"      | double | 64-bit float | 64   | IEEE 754 double-precision float |
| "F"      | float  | 32-bit float | 32   | IEEE 754 single-precision float |

Example Code
------------

    use feature 'say';

    my $x = 1/3;

    say "Double:";
    say unpack("B64", pack("d>", $x));   # Big-endian double

    say "Float:";
    say unpack("B32", pack("F>", $x));   # Big-endian float

Difference Illustration
-----------------------

    my $x = 1/3;

    my $float_bin  = unpack "B32", pack "F>", $x;
    my $double_bin = unpack "B64", pack "d>", $x;

    printf " Float (32-bit): %s\n", $float_bin;
    printf "Double (64-bit): %s\n", $double_bin;

Expected Output:

     Float (32-bit): 00111110101010101010101010101011
    Double (64-bit): 0011111111010101010101010101010101010101010101010101010101010101

IEEE 754 Format Summary
-----------------------

| Precision | Sign  | Exponent | Mantissa (Significand) | Bias |
|-----------|-------|----------|------------------------|------|
| Float     | 1 bit | 8 bits   | 23 bits                | 127  |
| Double    | 1 bit | 11 bits  | 52 bits                | 1023 |

Usage Notes
-----------

- Use "d" / "d>" if you want:
  - Higher precision
  - Full 64-bit IEEE 754 compliance
  - Direct compatibility with most CPUs' `double` type
- Use "F" / "F>" if you want:
  - Reduced memory size
  - Exact compatibility with 32-bit float structures (e.g., in network protocols or embedded devices)

Endianness Reminder
-------------------

- d> and F> -> big-endian
- d< and F< -> little-endian
- d and F (no angle) -> native-endian (machine dependent)
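
Given the caveat in the update above, the sizes the table claims can at least be checked directly on one's own machine. A minimal sketch using the core Config module (the variable names are mine, not from any of the posts):

```perl
use strict;
use warnings;
use Config;

my $x = 1/3;

# How many bytes do "d" and "F" actually produce on this machine?
my $d_size = length pack "d", $x;   # packs a C double
my $F_size = length pack "F", $x;   # packs a Perl NV

printf "'d' packs to %d bytes (doublesize=%s)\n", $d_size, $Config{doublesize};
printf "'F' packs to %d bytes (nvsize=%s)\n",     $F_size, $Config{nvsize};

# Dump the raw bits without hard-coding a width: "B*" adapts to the size.
printf "d bits: %s\n", unpack "B*", pack "d>", $x;
```

On a typical build both print 8 bytes, but the point is that neither size is guaranteed by pack's documentation.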

Replies are listed 'Best First'.
Re^6: Introspection into floats/NV
by LanX (Saint) on Jun 04, 2025 at 17:29 UTC
        Please don't do this.

        OTOH, you might further the AI Bullshit Armageddon where AI is trained on AI output. :)

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        see Wikisyntax for the Monastery

Re^6: Introspection into floats/NV
by ikegami (Patriarch) on Jun 04, 2025 at 18:05 UTC

    That's completely wrong [Update: except the bit on endianness].

    • d isn't defined as an IEEE 754 double-precision float. (That said, it virtually always is.)
    • F isn't defined as a float, and it's unlikely to be one.
    • F isn't defined as an IEEE 754 single-precision float, and it's unlikely to be one.
    • An IEEE 754 single-precision float doesn't have 32 bits of precision.
    • An IEEE 754 double-precision float doesn't have 64 bits of precision.
    • F is probably the same as d or something larger, not something smaller.

    See Mini-Tutorial: Formats for Packing and Unpacking Numbers.

    d is a double
    It is likely to be an IEEE 754 double-precision float, but it might not be.

    f is a float
    It is likely to be an IEEE 754 single-precision float, but it might not be.

    F is an NV
    It is likely to be an IEEE 754 double-precision float.
    It is possibly an IEEE 754 quad-precision float.
    It's unlikely to be an IEEE 754 single-precision float.
    It's conceivable for it to be an Intel 80-bit extended precision float.
    It might be none of those.

    An IEEE 754 quad-precision float will be 128 bits in size and have 113 bits of precision (or less for subnormals).
    An IEEE 754 double-precision float will be 64 bits in size and have 53 bits of precision (or less for subnormals).
    An IEEE 754 single-precision float will be 32 bits in size and have 24 bits of precision (or less for subnormals).
    An Intel 80-bit extended precision float will be at least 80 bits in size and have 64 bits of precision (or less for subnormals).

        Don't play dumb. We might actually come to believe it.

        You didn't use the word "defined", but your post defines the two pack formats (if one can say that of a definition that's completely wrong). That was the whole point of the post.

        And I didn't say what you posted lacked wisdom; I said what you posted is completely wrong.