Background
I hate the way .NET/IEEE-754 handles equality of floating-point numbers (FPNs) (i.e. double, float). It requires the programmer to be prescient about the yet-to-be-determined history of the number: one must choose a ‘reasonable’ value of epsilon, yet frequently no such value can be determined, because the programmer cannot know ahead of time how many operations, and hence roundings, the number will be subject to.
I would like to create a ‘better’ FPN within .Net. At creation, one would set its initial scientific error (or accept a default; the default being the minimum). When one performed operations on the FPN, those operations would update the scientific error (to account for the effect of rounding). When testing the equality of two FPNs, one could determine if their ranges overlapped (and even the probability that they are the same number).
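The idea above can be sketched in a few lines. This is a hypothetical illustration (the class name `TrackedFloat` and its API are mine, not an existing library): each value carries an absolute error bound that defaults to half an ulp of the stored value, every operation propagates the input errors using the standard first-order rules and adds a half-ulp bound for the new rounding, and equality becomes a range-overlap test.

```python
import math

class TrackedFloat:
    """A float paired with an accumulated absolute error bound.

    Sketch of the question's 'better FPN': operations widen the
    error bound by the propagated input errors plus a bound on the
    new rounding error (half an ulp of the result).
    """

    def __init__(self, value, error=None):
        self.value = float(value)
        # Default initial error: half the gap to the adjacent double.
        self.error = math.ulp(self.value) / 2 if error is None else error

    def __add__(self, other):
        v = self.value + other.value
        # Absolute errors add under addition, plus one new rounding.
        return TrackedFloat(v, self.error + other.error + math.ulp(v) / 2)

    def __mul__(self, other):
        v = self.value * other.value
        # First-order propagation: |a|*eb + |b|*ea, plus one new rounding.
        e = abs(self.value) * other.error + abs(other.value) * self.error
        return TrackedFloat(v, e + math.ulp(v) / 2)

    def overlaps(self, other):
        """Equality test: do the two value ranges intersect?"""
        return abs(self.value - other.value) <= self.error + other.error
```

With this, `TrackedFloat(0.1) + TrackedFloat(0.2)` compares equal to `TrackedFloat(0.3)` via `overlaps`, even though the raw doubles differ.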
Question
The code itself is simple; my questions are:
- How do operations (+ - * / to start with) affect the scientific error of an FPN? Are there formulae for determining this scientific error?
- I understand that an added complication is the uneven distribution of FPNs. This can obviously be accounted for by scaling the error in proportion to the inherent scientific error of the result; but is there a formula for determining the inherent scientific error present in a given FPN?
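On the second bullet: the inherent error of a stored double can be quantified as half the spacing between it and its neighbours, i.e. half an ulp, and that spacing grows with the number's magnitude. In Python (3.9+) the spacing is exposed directly as `math.ulp`; a short illustration, assuming only standard-library behaviour:

```python
import math

# The gap between adjacent doubles (one "ulp") grows with magnitude,
# so the absolute representation error is uneven across the number line,
# while the *relative* error stays within a factor of 2 of 2**-53.
for x in (1.0, 1000.0, 1e15):
    half_ulp = math.ulp(x) / 2          # worst-case error from storing x
    print(x, half_ulp, half_ulp / x)
```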
Answer (score: 10)
It has already been done; see Interval arithmetic. This type of automatic floating-point error analysis gives you a reliable upper bound on the accumulated rounding error. Perhaps the biggest disadvantage is the dependency problem, which can lead to correct but far too conservative error bounds.
Answer (score: 3)
Start by reading Hamming’s classic book Numerical Methods for Scientists and Engineers. He goes into a great deal of detail on the nature of floating-point numbers and of numbers in science in general. (Interesting tidbit: Floating point numbers are not uniformly distributed, and neither are the fundamental constants of physics, but the two distributions are similar.)
In any computational problem, you have to have some idea of what constitutes a “good” answer. In its simplest form: what value of epsilon is appropriate? For example, for problems with distances measured in astronomical units (1 AU = 93 million miles, approximately), an epsilon of 1 mile corresponds to a relative error of about 1E-8. Depending on what you are doing, that may or may not be enough.
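The arithmetic behind that figure, expressed as a relative-tolerance comparison (the distances are hypothetical; 1 mile / 93 million miles ≈ 1.1E-8):

```python
import math

AU_MILES = 93e6              # 1 AU ~ 93 million miles (approximate)
rel_tol = 1.0 / AU_MILES     # "within 1 mile at 1 AU" ~ 1.1e-8 relative error

a = AU_MILES                 # two hypothetical distances, in miles
b = a + 0.5                  # differ by half a mile
c = a + 2.0                  # differ by two miles

print(math.isclose(a, b, rel_tol=rel_tol))  # True: within the 1-mile budget
print(math.isclose(a, c, rel_tol=rel_tol))  # False: outside it
```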