If I do .1 + .1 + .1
in Python, I get 0.30000000000000004
. (I am not asking about Python in particular, and do not want Python specific answers.)
The only problem I can see with this is 0.30000000000000004 != 0.3
, so I need to take care in how I compare floats. Are there any other problems I need to be aware of with float rounding?
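For example, here is what I mean about comparisons (Java here, but the same holds in any language; the 1e-9 tolerance is just an illustrative choice, not a universal constant):

```java
public class Compare {
    public static void main(String[] args) {
        double sum = 0.1 + 0.1 + 0.1;
        System.out.println(sum == 0.3);   // false: sum is 0.30000000000000004

        // compare with a tolerance instead of ==
        double epsilon = 1e-9;            // illustrative choice of tolerance
        System.out.println(Math.abs(sum - 0.3) < epsilon);   // true
    }
}
```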
I can’t imagine that last digit being much of a problem in real life. For example, if I ask for a “0.30000000000000004 meter metal rod”, I’m not going to later complain that “this rod is 0.00000000000000004 meters too long!” (I can just cut the extra off, right? 🙂)
Even with a lot of float calculations going on, I can’t imagine the rounding getting so far off that it matters. Am I missing something? When do floating point rounding errors really matter?
Variances in the “least significant digit” can cause the entire number to be rounded in the wrong direction.
Let’s take an item that costs $0.705 – maybe the half cent comes from a discount or a tax, or it’s a 10% discount on something that costs $7.05. Whatever the case, we buy three of them and compute the total. The correct total is $2.115, which should round (half up) to $2.12.
public class Round {
    public static void main(String[] args) {
        double item = 0.705;
        double subtotal = item * 3;
        System.out.println(subtotal);

        double rounded = Math.round(subtotal * 100.0) / 100.0;
        System.out.println(rounded);

        subtotal = 2.115; // what the subtotal *should* be
        rounded = Math.round(subtotal * 100.0) / 100.0;
        System.out.println(rounded);
    }
}
However, the output of this program is:
2.1149999999999998
2.11
2.12
And there, we’re off by a penny because of something that happened in the least significant digit of a floating point calculation.
Having previously worked on point of sale code that, frighteningly, had doubles scattered all through it (I spent months converting it to an arbitrary-precision fixed point system, and I’m confident I fixed that problem – at least in all the code I was looking at…), I can assure you that these errors are a real problem, and the least significant digit being off by one is a big deal.
At first the reaction is “oh, it’s only a penny”, but this software is about money… and people get very touchy about money being off. There are two parts to that. The first is the confidence the customer loses when they see $0.705 * 3 = $2.11 when they know it should be $2.12. The other is that money adds up: over days or weeks or months, improperly calculated amounts can grow into sizable sums. This often happens in tax calculations (that’s an easy way to get awkward fractions), and the agencies that collect those taxes are much less sympathetic than a customer who is off by a cent… and can do nasty things like audits.
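The usual fix in Java is exact decimal arithmetic with java.math.BigDecimal (a sketch of the idea, not the particular fixed point system I built):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ExactRound {
    public static void main(String[] args) {
        // construct from a String so 0.705 is represented exactly,
        // not as the nearest binary double
        BigDecimal item = new BigDecimal("0.705");
        BigDecimal subtotal = item.multiply(new BigDecimal(3));
        System.out.println(subtotal);   // 2.115, exactly

        // round half up to whole cents
        System.out.println(subtotal.setScale(2, RoundingMode.HALF_UP));   // 2.12
    }
}
```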
When you are dealing with things that are measured (speed, distance, weight, etc.), rounding is OK: the accuracy of the original measurement is probably less than the accuracy of your calculation, and users accept that the final number is still an estimate, even if it’s a good estimate.
When you are dealing with things that are counted (money, people, ICBMs), rounding errors are a disaster. There is always an arithmetically correct answer, and often the required accuracy of that answer is enshrined in regulation, treaties and tax law. Providing a slightly wrong number gets you a free ticket into the Kafkaesque world of auditors, tax collectors and government inspectors; you may never escape with your sanity intact.
Even with a lot of float calculations going on, I can’t imagine the rounding getting so far off that it matters. Am I missing something? When do floating point rounding errors really matter?
The biggest problems arise with addition and subtraction. Given two numbers that are very close in magnitude, the difference (or the sum, if they have opposite signs) cancels the matching leading digits and can lose a great deal of precision. Given two numbers that are very far apart in magnitude, the smaller term can vanish entirely into the bit noise.
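Both effects are easy to demonstrate (Java again; the specific numbers are only illustrative):

```java
public class Precision {
    public static void main(String[] args) {
        // cancellation: the seven matching leading digits of a and b
        // cancel, so the difference keeps far fewer correct digits
        // than a double normally carries
        double a = 1.0000001;
        double b = 1.0000000;
        System.out.println(a - b);   // close to, but not exactly, 1.0E-7

        // absorption: a small term vanishes next to a huge one,
        // because 1.0 is below the spacing between doubles near 1e16
        double big = 1e16;
        System.out.println((big + 1.0) - big);   // 0.0 – the 1.0 was lost
    }
}
```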
This wreaks havoc with numerical techniques for solving initial value problems (aka numerical integration). Use a lousy technique such as Euler’s method and those double precision truncation errors mean the best you can do is a relative error of 10^-8 to 10^-6 — and that’s only if the interval of interest is small. Use a very good technique and you are lucky if you see a relative error of 10^-14 to 10^-12, and once again that’s only if the interval of interest is small. Propagate the planets for millions of years and all bets are off, even with the very best of techniques. You need to use quad precision for those very long intervals of interest.
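As a toy illustration, here is Euler’s method on y' = y with y(0) = 1, whose exact value at t = 1 is e (the step count and the problem itself are just illustrative choices):

```java
public class EulerDrift {
    public static void main(String[] args) {
        // integrate y' = y from t = 0 to t = 1 with Euler's method
        int steps = 1_000_000;
        double h = 1.0 / steps;
        double y = 1.0;
        for (int i = 0; i < steps; i++) {
            y += h * y;   // one Euler step: y(t+h) ≈ y(t) + h*y'(t)
        }
        // the exact answer is Math.E; the gap combines the method's own
        // truncation error with a million accumulated round-off errors
        System.out.println("computed:       " + y);
        System.out.println("relative error: " + Math.abs(y - Math.E) / Math.E);
    }
}
```

Even a million tiny, individually negligible steps leave a relative error many orders of magnitude above machine epsilon; better methods shrink the truncation part, but the round-off floor remains.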