I was reading this question, and the accepted answer says that BigDecimal
is the best type for representing currency values. I've also seen several other questions supporting the use of BigDecimal.
Why is BigDecimal the best type? I don't understand the relevance of arguments such as precise calculations or people with large net worth. I'm fairly sure there are no currencies that round to the 10th decimal place, and almost no one has eight bytes' worth of money (if it were stored as a long).
For taxes and other intermediary calculations, I can see how it is important to make sure you are using a proper type that doesn’t round/truncate values prematurely, but that doesn’t seem relevant to me either since an error of 0.0000000001 cents on the dollar does not seem that big to me.
It may be due to my lack of understanding of the importance of accuracy when it comes to financial statements, so perhaps someone can also clarify whether a millionth of a cent is significant as a final number.
It all comes down to precision, and Java’s BigDecimal seems the correct answer on that platform because it gives you the best support for specifying and preserving what can be highly variable precision.
A millionth of a cent is only 10⁻⁵ from a shift in payable amount, and it's not that hard to come up with a situation where that level of precision matters.
- Company A is publicly traded, with fifty million (5×10⁷) shares outstanding and a current price of $10.
- Person B buys $1 of Company A, through a broker. They now own one-tenth of a share, or one five-hundred-millionth of Company A (2×10⁻⁹).
- Company A is found to be absurdly over-valued, and after a bit of a scandal winds up accepting a stock-swap purchase by Company C at a value of $1,000 (10³), with each shareholder to be paid out the equivalent value in either cash or stock.
- How much cash or stock can Person B get? Note that if you get the number wrong, Person B (who happens to be an out-of-work lawyer in his 30s) can mess up the entire deal and possibly earn himself a paycheck by suing for his value lost plus legal fees.
Now, the valuation is fairly absurd on purpose, but the same "you need to get it right or it explodes" dynamic applies even if the numbers are only off by a minuscule amount.
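A sketch of that arithmetic with BigDecimal, assuming the $1,000 figure is the total deal value (so Person B's stake is two billionths of it):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class Payout {
    public static void main(String[] args) {
        BigDecimal sharesOutstanding = new BigDecimal("50000000"); // 5×10^7
        BigDecimal sharesOwned = new BigDecimal("0.1");            // Person B's tenth of a share
        BigDecimal dealValue = new BigDecimal("1000");             // assumed total deal value, in dollars

        // Person B's fraction of the company: 0.1 / 5×10^7 = 2×10^-9
        BigDecimal fraction = sharesOwned.divide(sharesOutstanding, MathContext.DECIMAL128);

        // Person B's exact entitlement: two ten-thousandths of a cent
        BigDecimal payout = fraction.multiply(dealValue);

        System.out.println(fraction.toPlainString()); // 0.000000002
        System.out.println(payout.toPlainString());   // 0.000002000
    }
}
```

Note that the payout lands well below a cent: exactly the scale where a type that silently rounds would discard Person B's money.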
The right type to represent currency values depends on the application.
Two plausible choices are a type capable of exact arithmetic or a
floating-point type. Please remember two facts:
- In floating-point arithmetic, the usual algebraic identities (such as commutativity and associativity) no longer hold. They still hold in exact arithmetic.
- In exact arithmetic, it is not possible to work with functions other than polynomials, so we cannot use the square root or the exponential function. Floating-point arithmetic allows us to use them.
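A minimal illustration of the first point, in Java since the thread is about BigDecimal:

```java
import java.math.BigDecimal;

public class Associativity {
    public static void main(String[] args) {
        // Floating point: (a + b) + c and a + (b + c) disagree
        double a = 0.1, b = 0.2, c = 0.3;
        System.out.println((a + b) + c);                 // 0.6000000000000001
        System.out.println(a + (b + c));                 // 0.6
        System.out.println((a + b) + c == a + (b + c));  // false

        // Exact decimal arithmetic: associativity holds
        BigDecimal x = new BigDecimal("0.1");
        BigDecimal y = new BigDecimal("0.2");
        BigDecimal z = new BigDecimal("0.3");
        System.out.println(x.add(y).add(z).compareTo(x.add(y.add(z)))); // 0
    }
}
```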
In double-entry personal finance software, exact arithmetic is
preferable. We expect all of the recorded cashflows to sum up to
zero. Since this is an algebraic identity, we can only verify it if
we use exact arithmetic. Using floating point here would defeat the
whole principle of double-entry accounting.
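A sketch of how the zero-sum check plays out, with a hypothetical three-entry ledger:

```java
import java.math.BigDecimal;

public class Ledger {
    public static void main(String[] args) {
        // Three cashflows that should net to zero: +0.10, +0.20, -0.30
        double d = 0.10 + 0.20 - 0.30;
        System.out.println(d);          // 5.551115123125783E-17, not zero
        System.out.println(d == 0.0);   // false -- the books don't balance

        BigDecimal b = new BigDecimal("0.10")
                .add(new BigDecimal("0.20"))
                .subtract(new BigDecimal("0.30"));
        System.out.println(b.compareTo(BigDecimal.ZERO) == 0); // true -- they balance
    }
}
```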
In internal software used by a clearing house, exact arithmetic is
also mandatory, for essentially the same reason: there is a
conservation principle, so cashflows should always sum up to
zero. Since the program has to satisfy an invariant of an algebraic
nature, it must rely on exact arithmetic.
In pricing or risk-management software implementing methods of mathematical finance, complex computations reminiscent of
physics simulations are performed and estimates of expectations are
computed. The very nature of this problem requires the use of
floating-point numbers.
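As an illustration (a hypothetical model with made-up parameters, not from any real system): a Monte Carlo estimate of an expected option payoff under a lognormal model, which needs exp and sqrt and therefore floating point:

```java
import java.util.Random;

public class McSketch {
    public static void main(String[] args) {
        // Hypothetical parameters: spot 100, strike 100, 20% volatility, one year, zero rate
        double s0 = 100.0, strike = 100.0, vol = 0.2, t = 1.0;
        Random rng = new Random(42);
        int n = 100_000;
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double z = rng.nextGaussian();
            // Lognormal terminal price: exp and sqrt have no exact-arithmetic counterpart
            double sT = s0 * Math.exp(-0.5 * vol * vol * t + vol * Math.sqrt(t) * z);
            sum += Math.max(sT - strike, 0.0);
        }
        System.out.println("Estimated expected payoff: " + sum / n);
    }
}
```

The estimate is only ever approximate anyway, so floating-point rounding is lost in the Monte Carlo noise; that is why exactness buys nothing here.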
I am going to make an assumption here, because you don't spell one out in your question.
Why is BigDecimal the best? Because it's better than the alternatives.
Why is BigDecimal better than float?
Because a float cannot precisely hold a value like the decimal 0.1.
The binary representation of decimal 0.1 is a repeating fraction; truncated to 32 bits it is binary 0.00011001100110011001100110011001, which converted back to decimal is 0.0999999998603016138.
Now I want to check an a >= 0.1 condition and I see it fail…
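The same failure shows up one decimal place over, and is easy to reproduce: adding 0.1 ten times with double falls just short of 1.0, so a >= comparison that should succeed fails, while BigDecimal sums exactly (a small sketch):

```java
import java.math.BigDecimal;

public class TenDimes {
    public static void main(String[] args) {
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.1; // each step rounds to the nearest binary double
        }
        System.out.println(total);        // 0.9999999999999999
        System.out.println(total >= 1.0); // false -- ten dimes don't make a dollar

        BigDecimal exact = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            exact = exact.add(new BigDecimal("0.1"));
        }
        System.out.println(exact.compareTo(BigDecimal.ONE) >= 0); // true
    }
}
```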