Python Decimal Module - Undesired Float-Like Output?
Solution 1:
Try specifying the numbers as strings:
>>> Decimal('0.10') * Decimal('0.10') - Decimal('0.0100')
Decimal('0.000')
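This works because a string is converted to Decimal exactly, while a float literal is first converted to binary floating point. A minimal sketch of the difference (default context assumed):
>>> from decimal import Decimal
>>> Decimal('0.10')
Decimal('0.10')
>>> Decimal(0.10)
Decimal('0.1000000000000000055511151231257827021181583404541015625')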
Solution 2:
The float literal 0.10 is not precisely the mathematical number 0.10, so using it to initialize Decimal doesn't avoid the float precision problem. Instead, using strings to initialize Decimal gives the expected result:
from decimal import Decimal

x = Decimal('0.10') * Decimal('0.10')
y = Decimal(x) - Decimal('0.010')
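Printing the intermediate values confirms the arithmetic stays exact; a minimal sketch, assuming the default context rather than the prec = 2 context from the original question:
from decimal import Decimal

x = Decimal('0.10') * Decimal('0.10')
print(x)                          # 0.0100
print(x - Decimal('0.010'))       # 0.0000 -- an exact zero, no float-like output
print(x - Decimal('0.010') == 0)  # True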
Solution 3:
This is a more detailed explanation of the point made in the existing answers.
You really do need to get rid of float literals such as 0.1 if you want exact decimal arithmetic. Such literals are typically represented by IEEE 754 64-bit binary floating-point numbers.
The closest such number to 0.1 is 0.1000000000000000055511151231257827021181583404541015625. Its square is 0.01000000000000000111022302462515657123851077828659396139564708135883709660962637144621112383902072906494140625, which is not the same as the closest binary floating-point number to 0.01, namely 0.01000000000000000020816681711721685132943093776702880859375.
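You can reproduce those exact digits with the decimal module itself by raising the precision far above the default 28 significant digits, so that the multiplication below is not rounded (the figure of 150 digits is just an arbitrarily generous choice):
from decimal import Decimal, getcontext

getcontext().prec = 150  # plenty of room: the exact square needs about 110 significant digits

print(Decimal(0.1))                 # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.1) * Decimal(0.1))  # the exact square quoted above
print(Decimal(0.01))                # 0.01000000000000000020816681711721685132943093776702880859375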
You can get a clearer view of what is going on by removing the prec = 2 context setting, allowing more precise output:
from decimal import *

q = Decimal(0.01)                  # exact value of the float closest to 0.01
x = Decimal(0.10) * Decimal(0.10)  # float-derived Decimals; the product is rounded to the context precision
y = Decimal(x) - Decimal(q)
print(q)
print(x)
print(y)
Output:
0.01000000000000000020816681711721685132943093776702880859375
0.01000000000000000111022302463
9.020562075127831486705690622E-19
If you had used string literals, as suggested by the other responses, the conversion to Decimal would have been done directly, without going through binary floating point. Both 0.1 and 0.01 are exactly representable in Decimal, so there would be no rounding error.
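A quick check of that claim, assuming the default context:
from decimal import Decimal

print(Decimal('0.1'))    # 0.1  -- stored exactly
print(Decimal('0.01'))   # 0.01 -- stored exactly
print(Decimal('0.1') * Decimal('0.1') == Decimal('0.01'))  # True: exact decimal arithmetic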