Using this calculator, I can see how the decimal “.8” and the expression “.7 + .1” have different representations:
.8      = 1.1001100110011001100110011001100110011001100110011010 × 2^-1
.7 + .1 = 1.1001100110011001100110011001100110011001100110011001 × 2^-1
(only the last two bits of the significand differ)
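For anyone who wants to reproduce this without the calculator, here is one way to dump the raw bit pattern of a double in JavaScript (doubleBits is just a helper name made up for this sketch):

// Return the 64 raw IEEE 754 bits of a number as a binary string.
function doubleBits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);               // big-endian by default
  return view.getBigUint64(0).toString(2).padStart(64, "0");
}

console.log(doubleBits(0.8));          // sign | exponent | 52-bit significand
console.log(doubleBits(0.7 + 0.1));    // differs only in the last two bits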
But what mechanism causes “0.8” to be printed for the top value? E.g. alert(.8)
in JavaScript. Why does it not print something like “0.800000011920929”?
Is this a feature of IEEE 754 or the programming language implementing it?
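To make the question concrete, this is what a JavaScript console shows; the toPrecision calls are only there to force extra digits and reveal the stored values:

console.log((0.8).toString());            // "0.8"
console.log((0.7 + 0.1).toString());      // "0.7999999999999999"

// Both values carry more information than the default strings suggest:
console.log((0.8).toPrecision(20));       // "0.80000000000000004441"
console.log((0.7 + 0.1).toPrecision(20)); // "0.79999999999999993339"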
The paper How to Print Floating-Point Numbers Accurately by Guy L. Steele Jr. and Jon L. White describes one approach to the problem of printing floating-point numbers.
Quoting from that paper:
What is the correct number of digits to produce if the user doesn’t specify? If a system prints too many digits, the excess digits may be “garbage,” reflecting more information than the number actually contains; if a system prints too few digits, the result will be wrong in a stronger sense: converting the decimal representation back to
binary may not recover the original binary value.
Which echoes delnan’s comment: “What would you expect to be printed?”
As far as I can tell, how IEEE 754 numbers are printed by default is a feature of the programming language rather than of IEEE 754 itself.
The Steele and White paper explains techniques for printing floating-point numbers accurately, using just enough digits to round-trip, and such algorithms are effectively what JavaScript relies on: the ECMAScript specification requires that the string produced for a Number convert back to exactly the same value, which is why alert(.8) shows “0.8” while alert(.7 + .1) shows “0.7999999999999999”.
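To make the round-trip criterion concrete, here is a small JavaScript example (the particular digit counts are just an illustration, not taken from the paper):

const x = 0.7 + 0.1;                    // actual double: 0.7999999999999999...

// Enough digits: the printed string parses back to the very same double.
console.log(Number(x.toString()) === x);       // true

// Too few digits: the round trip fails and a different double comes back.
console.log(Number(x.toPrecision(15)) === x);  // false ("0.800000000000000" parses as 0.8)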
Most implementations round floating-point numbers to 6 significant digits by default (removing any trailing zeros) when converting them to strings, since that seems to match what most users expect; C’s printf with “%g”, for example, behaves this way.
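The effect of such a default can be imitated in JavaScript with toPrecision (this only mimics the 6-significant-digit rounding; it is not how any particular engine decides what to print):

const sum = 0.7 + 0.1;                 // actual double: 0.7999999999999999...

// Round to 6 significant digits, then let Number/toString drop the trailing zeros:
const six = sum.toPrecision(6);        // "0.800000"
console.log(Number(six).toString());   // "0.8"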