
Dealing with Decimal Numbers in Python

Claudio Salvatore Arcidiacono
3 min read · May 24, 2021


Python is used for a myriad of use cases. In some of them, like Data Science applications, losing a few decimal digits of precision will not make a big difference. However, in other applications, like financial applications, loss of precision is not acceptable.

In this article I would like to give some best practices on how to properly manipulate decimal numbers of arbitrary precision in Python without loss of precision.

Is it really a problem?

In the first example we will try to round the number 1.5.
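
In a Python 3 shell:

>>> round(1.5)
2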

As we would have expected, it is rounded to 2. So far so good. What happens if we now round 0.5?
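
>>> round(0.5)
0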

0.5 is rounded to 0. The least experienced of you might be surprised by this behaviour. However, the default implementation of the round function in Python uses a rounding strategy called bankers' rounding, or round half to even. This means that numbers that lie exactly halfway between two alternatives are rounded to the closest even digit, in this case 0. Here are some examples that can better help you understand how bankers' rounding works.
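
>>> round(2.5)
2
>>> round(3.5)
4
>>> round(4.5)
4
>>> round(5.5)
6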

Now that you have understood bankers' rounding, it is time for a simple test: what happens if we round the number 1.05 to 1 decimal digit? It should be rounded to 1.0, right? Let's try it in a Python shell.
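
>>> round(1.05, 1)
1.1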

WRONG! It is rounded to 1.1.

In order to understand why 1.05 is rounded to 1.1 we need to look a little bit deeper.

As you might remember from your Computer Science studies, decimal numbers are usually represented in computers in floating point notation. That is, the fractional part is represented as a linear combination of negative powers of 2.

For example, 0.5 is represented as 1*2^-1 + 0*2^-2 + 0*2^-3 + ... + 0*2^-n. This means that the decimal number 0.5 can be perfectly represented in floating point notation. The number 1.05, however, cannot be perfectly represented in floating point notation using a precision of 64 bits (the standard floating point precision used in Python). In order to unveil the real representation of 1.05 we have to use a little trick:
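
>>> format(1.05, '.100f')
'1.0500000000000000444089209850062616169452667236328125000000000000000000000000000000000000000000000000'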

The code above formats the number 1.05 so that the first 100 decimal digits of its real representation are shown.

As you can see, the 64-bit floating point representation of the number 1.05 is a little bit bigger than 1.05, and for this reason it is rounded up to 1.1.

How to properly round decimal numbers

In order to perform numeric operations with decimal numbers without losing precision, the Python standard library offers the decimal module.

Here is an example of how to properly round the number 1.05 to 1 decimal digit using the decimal module:
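
One way is to use the quantize method, taking care to construct the Decimal from the string '1.05' rather than from the float 1.05 (which would already carry the binary representation error):

>>> from decimal import Decimal
>>> Decimal('1.05').quantize(Decimal('0.1'))
Decimal('1.0')

This time the tie is broken correctly: bankers' rounding sends 1.05 to its even neighbour, 1.0.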

The decimal module

Rounding numbers is just the beginning of what the decimal module can do. All of the usual mathematical operations are also available. In the examples below you can see some approximation errors that are caused by using float instead of Decimal.
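
>>> 0.1 + 0.2
0.30000000000000004
>>> 1.1 * 3
3.3000000000000003
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')
>>> Decimal('1.1') * 3
Decimal('3.3')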

Another powerful tool of the decimal module is the Context object.

By changing the context we can, for example, change the default rounding strategy:
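
>>> from decimal import Decimal, ROUND_HALF_UP, localcontext
>>> with localcontext() as ctx:
...     ctx.rounding = ROUND_HALF_UP
...     print(Decimal('1.05').quantize(Decimal('0.1')))
...
1.1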

In the code above, we created a local context inside which the rounding strategy is set to ROUND_HALF_UP. So, whenever the rounding digit is 5 or more, the number is rounded up.

The default precision of the decimal module is set to 28 significant digits. This means that, if your decimals have at most 28 significant digits, you should be fine. If you need more precision, you can change it using the Context class. For example:
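
>>> from decimal import Decimal, getcontext
>>> getcontext().prec
28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
>>> getcontext().prec = 50
>>> Decimal(1) / Decimal(7)
Decimal('0.14285714285714285714285714285714285714285714285714')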

For more information on what you can do with the decimal module, please check the official documentation.

When not to use the Decimal class

The Decimal class allows us to perform operations with decimal numbers without losing precision. This increased precision comes at a performance cost. Here is a performance comparison between float and Decimal (the full benchmark is in the gist linked below):
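
As a minimal sketch of such a comparison (the exact timings, and therefore the exact speed-up factor, will vary with your machine and Python version):

import timeit

# Time one million additions with float ...
float_time = timeit.timeit('a + b', setup='a, b = 1.05, 2.05')

# ... and one million additions with Decimal.
decimal_time = timeit.timeit(
    'a + b',
    setup="from decimal import Decimal; a, b = Decimal('1.05'), Decimal('2.05')",
)

print(f'float:   {float_time:.3f} s')
print(f'Decimal: {decimal_time:.3f} s')
print(f'Decimal is about {decimal_time / float_time:.0f}x slower')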


https://gist.github.com/ClaudioSalvatoreArcidiacono/7c69098fe7898e689032fe99a83b16bc

Using float instead of Decimal is around 100 times faster. This means that for applications that are not precision-critical, float should be preferred over Decimal.

Thank you very much for reading this story! I hope you learned something new and that you enjoyed it.
