Recently I got to know Python's decimal module. While I have learned how to work with it, one question still persists:
What are the main cases where coders should use decimal.Decimal for float comparison?
For example, in the past I have written automatic tests for installable packages. In those tests, I always compared floats with pytest.approx() or by comparing the absolute difference against an epsilon value.
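To illustrate what I mean, here is a minimal sketch of both approaches (using pytest.approx and a hand-rolled epsilon check):

import pytest

a = 0.1 + 0.2
b = 0.3

# pytest-style approximate comparison:
assert a == pytest.approx(b)

# manual comparison of the absolute difference against an epsilon value:
epsilon = 1e-9
assert abs(a - b) < epsilon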
Maybe my experience is simply not enough yet to see in which cases the decimal module is the better choice. Could anyone provide some insights?
The main reasons to use decimal.Decimal over the float type:
1. High Precision
decimal.Decimal is the better choice if you want more accurate calculations and need to avoid rounding errors, since the module lets you control the working precision yourself (a form of floating-point error mitigation).
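For example (a small sketch; the precision of 50 digits is arbitrary), the context precision can be raised far beyond the roughly 15-17 significant digits of a 64-bit float:

from decimal import Decimal, getcontext

getcontext().prec = 50          # work with 50 significant digits
print(Decimal(1) / Decimal(7))  # 0.14285714285714285714285714285714285714285714285714
print(1 / 7)                    # 0.14285714285714285  (float, ~17 significant digits)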
2. Exact Representation of Decimal Numbers
Because of the binary nature of floats, many decimal numbers cannot be represented exactly, for example:
0.1 + 0.2 == 0.3 # False
But when using decimal.Decimal:
from decimal import Decimal
Decimal("0.1") + Decimal("0.2") == Decimal("0.3") # True
3. Consistent Results Across Platforms
The results of float arithmetic can sometimes differ across platforms because of differences in IEEE 754 implementations, whereas decimal arithmetic is fully specified by the module, so the same computation yields the same result everywhere.
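As a small sketch (the precision and rounding values are only illustrative), you can pin the arithmetic down with an explicit context, and decimal then produces the same digits regardless of platform:

from decimal import Context, Decimal, ROUND_HALF_EVEN

# An explicit context fixes precision and rounding, independent of the platform's FPU.
ctx = Context(prec=12, rounding=ROUND_HALF_EVEN)
print(ctx.divide(Decimal(2), Decimal(3)))  # 0.666666666667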