in reply to shocking imprecision

The foregoing is one of the reasons why COBOL (and other languages as well) implemented binary-coded decimal (BCD) arithmetic. When numbers are expressed this way, a base-10 number really is stored in base 10: each digit, and the sign, occupies a group of (usually) 4 or 8 bits. The arithmetic is then performed in base 10 to a definite, fixed precision. Some processors, notably IBM mainframes, support packed-decimal arithmetic directly in hardware.
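To make the encoding concrete, here is a minimal Python sketch of packed BCD, the common variant where two digits share a byte, one per 4-bit nibble. The helper names are my own, not from any particular COBOL runtime:

```python
def to_packed_bcd(n: int) -> bytes:
    """Encode a non-negative integer as packed BCD:
    two base-10 digits per byte, one per 4-bit nibble."""
    digits = str(n)
    if len(digits) % 2:          # pad to an even digit count
        digits = "0" + digits
    return bytes(
        (int(hi) << 4) | int(lo)
        for hi, lo in zip(digits[::2], digits[1::2])
    )

def from_packed_bcd(b: bytes) -> int:
    """Decode packed BCD back to an integer."""
    return int("".join(f"{byte >> 4}{byte & 0xF}" for byte in b))

# The hex dump of the encoding mirrors the decimal digits exactly:
assert to_packed_bcd(1234).hex() == "1234"
assert from_packed_bcd(to_packed_bcd(987654)) == 987654
```

That last property, the stored nibbles literally being the decimal digits, is what makes BCD arithmetic exact in base 10: there is no binary fraction anywhere that could fail to represent a value like 0.1.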

In fact, COBOL's default is to perform decimal arithmetic, unless an item's USAGE clause specifies COMP[UTATIONAL].
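Python's standard `decimal` module gives a rough analogue of this behavior, doing fixed-precision base-10 arithmetic in software, so you can see the difference from binary floats directly:

```python
from decimal import Decimal, getcontext

getcontext().prec = 28  # a definite, fixed working precision

# Binary floating-point cannot represent 0.1 exactly, so the sum drifts:
assert 0.1 + 0.2 != 0.3

# Base-10 arithmetic keeps every decimal digit exact:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```

Note that the decimals are constructed from strings; building `Decimal(0.1)` from a float would faithfully capture the float's binary rounding error instead.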

Another strategy, used e.g. by Microsoft Access and its JET database engine, is scaled integers. The Currency data type in that system is implemented as a true binary integer whose value is understood to be scaled by 10,000. This gives exact base-10 results with exactly 4 fixed decimal places, without using floating-point at all.
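The same idea is easy to sketch in a few lines of Python; the helper names and the string-based parser here are illustrative, not JET's actual implementation:

```python
from decimal import Decimal

SCALE = 10_000  # 4 fixed decimal places, as in a Currency-style type

def currency(s: str) -> int:
    """Parse a decimal string into a scaled integer."""
    return int(Decimal(s) * SCALE)

def show(c: int) -> str:
    """Format a scaled integer back as a decimal string."""
    sign = "-" if c < 0 else ""
    c = abs(c)
    return f"{sign}{c // SCALE}.{c % SCALE:04d}"

# Addition is plain integer addition -- no rounding drift possible:
a, b = currency("19.99"), currency("0.01")
assert show(a + b) == "20.0000"

# Multiplying by a whole count also stays exact:
price, qty = currency("2.50"), 3
assert show(price * qty) == "7.5000"
```

One design note: multiplying two scaled values together yields a result scaled by SCALE², so a real implementation must divide the product back down by SCALE (choosing a rounding rule when it does), which is where such systems concentrate their care.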