True precision of a number

Suppose I have the following code:

0.7 / 100

I would expect to get 0.007, but instead I get 0.006999999999999999. I know this is a floating-point precision error, but how do I get the expected value?

I tried:

Decimal(0.7 / 100)
Decimal(0.7)/int(100)

but I can't get either to work. I tried searching, but I can't seem to phrase the question in a way that turns up a good result.



Solution 1:[1]

from decimal import Decimal

>>> number = Decimal('0.7') / 100
>>> float(number)
0.007

Solution 2:[2]

0.7 is a floating-point literal, so it creates a binary floating-point value regardless of what you do with it afterwards.
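You can see this directly: by the time `Decimal` receives the value, the rounding to binary has already happened, so the constructor faithfully preserves the error (a quick demonstration, not part of the original answer):

```python
from decimal import Decimal

# The literal 0.7 is rounded to the nearest binary double before
# Decimal ever sees it, so Decimal(0.7) records that error exactly:
print(Decimal(0.7))
# 0.6999999999999999555910790149937383830547332763671875
```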

The decimal module can create decimal values exactly, but you need to pass the constructor a string instead of a float, or build the result out of other exact values:

For example,

>>> import decimal
>>> decimal.Decimal("0.7") / 100
Decimal('0.007')
>>> decimal.Decimal(7) / 1000
Decimal('0.007')
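As an aside not drawn from either answer: if you only need the value for display and are happy to stay with binary floats, rounding or string formatting also produces 0.007:

```python
# Rounding the binary float to 3 decimal places gives the nearest
# representable double, whose repr is the expected "0.007":
print(round(0.7 / 100, 3))

# String formatting rounds at display time without changing the value:
print(f"{0.7 / 100:.3f}")
```

Note that `round()` still returns a binary float, so this only fixes how the number reads, not the underlying representation; for exact arithmetic, use `Decimal` as above.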

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: richardec
Solution 2: Tim Peters