Question 11 in the Software Quality section of the IEEE Computer Society's “Real-World Software Engineering Problems” (Naveda and Seidman) lists floating-point computation as undesirable because “the accuracy of the computations cannot be guaranteed”.
This is in the context of computing acceleration for the emergency braking system of a high-speed train. The reasoning seems to invoke possible errors in small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can small roundoff errors actually cause trouble during deceleration in an emergency braking system? A related problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen with a high-speed train? The concern about floating-point errors seems ill-founded in this context. Any insight? Floating point is used for the acceleration computation, so perhaps the concern is inching over a speed limit, but floating point should be just fine if they use a double in whatever implementation language.
The actual problem in the text reads: During the inspection of the code for the emergency braking system of a new high speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?
- The code contains three recursive functions (well, that one is obvious).
- The computation of acceleration uses floating point arithmetic.
- All other computations use integer arithmetic.
- The code contains one linked list that uses dynamic memory allocation (second obvious problem).
- All inputs are checked to determine that they are within expected bounds before they are used.
The problem with floating-point code is that it doesn't behave like pen-and-paper decimal arithmetic. This leads people who haven't taken the time to understand what is going on to make incorrect assumptions about the behavior of floating-point numbers, which can then lead to incorrect program behavior. The behavior of FP calculations with very large and very small numbers can be quite unintuitive, especially if you mix the two.
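For example, a minimal sketch in C (the specific values are only illustrations of the rounding behavior, not anything from the book):

```c
#include <stdio.h>

int main(void)
{
    /* Decimal intuition fails: 0.1 and 0.2 have no exact binary form */
    double sum = 0.1 + 0.2;
    printf("%d\n", sum == 0.3);         /* prints 0; sum is 0.30000000000000004 */

    /* Mixing magnitudes: the small addend is absorbed entirely */
    float big = 1.0e8f;
    printf("%d\n", big + 1.0f == big);  /* prints 1; the +1 vanished */

    return 0;
}
```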
IF your team understands floating point numbers and how they work, and IF your system’s design doesn’t include anything that will trip up FP calculations, then there is no inherent reason not to use them.
In a lot of cases it's FAR easier to just put a blanket prohibition on FP than to try to ensure that everyone understands what they are doing and that the numbers can't get away from you in unexpected ways. The review team may also find it easier to verify that you're doing the conversions and calculations correctly in integer rather than FP, because it's easier to see where you'll lose precision.
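As a rough sketch of what the integer alternative looks like (the units, names, and scaling here are hypothetical, chosen only to make the precision loss visible):

```c
#include <stdint.h>

/* Compute acceleration in mm/s^2 from two speed samples in mm/s taken
 * dt_ms milliseconds apart. Every place precision can be lost is a
 * visible, reviewable step: the widening, the scaling, the division. */
static int32_t accel_mm_s2(int32_t v0_mm_s, int32_t v1_mm_s, int32_t dt_ms)
{
    int64_t dv = (int64_t)v1_mm_s - v0_mm_s;  /* widen to avoid overflow */
    return (int32_t)((dv * 1000) / dt_ms);    /* ms -> s; truncation happens here */
}
```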
Floating point is only undesirable if you use the wrong method for comparing two values: an equals sign instead of a delta, or extreme ranges where the relative error (in ulps) is too great. FP numbers are not bad for real-time systems per se.
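A sketch of such a delta comparison in C (the tolerance values are placeholders; in a real system they come from your error analysis):

```c
#include <math.h>
#include <stdbool.h>

/* Compare with both an absolute and a relative tolerance: the absolute
 * bound handles values near zero, the relative bound handles large
 * magnitudes where a fixed delta would be too strict. */
static bool nearly_equal(double a, double b)
{
    const double abs_tol = 1e-9;   /* placeholder */
    const double rel_tol = 1e-12;  /* placeholder */
    double diff = fabs(a - b);
    return diff <= abs_tol || diff <= rel_tol * fmax(fabs(a), fabs(b));
}
```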
Recursive functions and dynamic memory allocation, however, are undesirable for obvious reasons: the real-time behavior of both is difficult to predict and bound.
Floating point is generally undesirable in hard real-time applications, both because of the accumulation of error and because FP computation is typically slower than integer arithmetic. There is also the overhead of promoting integers to floating point (or vice versa) for comparison if you're mixing the two. And since you can't simply compare a floating-point number to zero, a simple statement like `if (x == 0)` turns into `if (x > -0.0001 && x < 0.0001)`: you've gone from one comparison to two comparisons and a logical operation. Not what you want to be wasting cycles on when people's lives are on the line.
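For contrast, a sketch of the same guard in scaled integer arithmetic (the sensor interface and mm/s scaling are hypothetical), where the zero test is exact and back to a single comparison:

```c
#include <stdint.h>

extern int32_t read_speed_mm_s(void);  /* hypothetical sensor read, mm/s */

void check_stopped(void)
{
    int32_t speed = read_speed_mm_s();
    if (speed == 0) {          /* exact: one comparison, no tolerance */
        /* ... train has stopped ... */
    }
}
```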