Decimal Fixed-Point and Floating-Point Arithmetic in Python

This article explains how decimal fixed-point and floating-point arithmetic work in Python, and when each is the right choice for accurate calculations.

Real numbers in Python can be represented in two main ways: as floating-point values and as decimal fixed-point values. Floating-point numbers are fast but can be imprecise because computers store them in binary. Decimal fixed-point numbers, on the other hand, represent decimal values exactly and are useful when calculations must be precise.

By the end of this article, you will understand how both types of numbers work, when to use them, and how to write programs using them.

Floating-Point Arithmetic

Floating-point numbers use the float type in Python. Let's see what happens when we add 0.1 and 0.2 using floating-point arithmetic −

x = 0.1 + 0.2  
print(x)

The output of the above code is −

0.30000000000000004

The result is not exactly 0.3 as we might expect. This happens because floating-point numbers are stored in binary format, which cannot precisely represent all decimal numbers.
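You can see the stored binary approximation directly by printing the float with more digits, or by converting it to an exact fraction − a short sketch:

```python
from fractions import Fraction

# Printing 0.1 with 20 decimal places reveals the binary approximation
print(f"{0.1:.20f}")

# Fraction(float) recovers the exact value the computer stores for 0.1
print(Fraction(0.1))
```

The fraction's denominator is a power of two (2**55), which is why a decimal value like 0.1 can never be stored exactly in binary floating point.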

Decimal Fixed-Point Arithmetic

The Decimal class from Python's decimal module provides exact decimal representation. This solves the precision issues we see with floating-point numbers −

from decimal import Decimal

# Create Decimal objects and perform addition
x = Decimal("0.1") + Decimal("0.2")  
print(x) 

The output of the above code is −

0.3

Notice how Decimal gives us the exact result we expect. When creating Decimal objects, always use strings to avoid floating-point conversion errors.
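To see why strings matter, compare constructing a Decimal from a float with constructing it from a string − a quick sketch:

```python
from decimal import Decimal

# Passing a float captures its binary approximation, error and all
print(Decimal(0.1))

# Passing a string captures the decimal value you actually intended
print(Decimal("0.1"))
```

The float-based constructor faithfully preserves the binary representation error, so the string form is the one to use in practice.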

Setting Precision with Decimal

You can control the working precision of decimal calculations, measured in significant digits, using getcontext().prec. This is especially useful for financial calculations −

from decimal import Decimal, getcontext

# Set precision to 4 significant digits (not decimal places)
getcontext().prec = 4  

price = Decimal("19.99")
quantity = Decimal("3")
total = price * quantity

print(f"Price: ${price}")
print(f"Quantity: {quantity}")
print(f"Total: ${total}")

The output of the above code is −

Price: $19.99
Quantity: 3
Total: $59.97
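Note that prec counts significant digits across the whole number, not digits after the decimal point. A small sketch showing the effect on a division makes this clear:

```python
from decimal import Decimal, getcontext

# With 4 significant digits, 1/3 is rounded to 0.3333
getcontext().prec = 4
print(Decimal("1") / Decimal("3"))

# Raising the precision gives more significant digits
getcontext().prec = 8
print(Decimal("1") / Decimal("3"))
```

In the price example above, 59.97 already fits within 4 significant digits, so the result is unaffected; a division like this one shows the setting in action.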

Comparison Example

Let's compare both approaches side by side to see the difference in precision −

from decimal import Decimal

# Floating-point arithmetic
float_result = 0.1 + 0.2

# Decimal arithmetic
decimal_result = Decimal("0.1") + Decimal("0.2")

print(f"Floating-point result: {float_result}")
print(f"Decimal result: {decimal_result}")
print(f"Are they equal? {float_result == float(decimal_result)}")

The output of the above code is −

Floating-point result: 0.30000000000000004
Decimal result: 0.3
Are they equal? False
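When you do need to compare floating-point results, test them within a tolerance rather than for exact equality; the standard library's math.isclose does this − a minimal sketch:

```python
import math

float_result = 0.1 + 0.2

# Exact equality fails because of the binary representation error
print(float_result == 0.3)

# isclose compares within a relative tolerance (default 1e-09)
print(math.isclose(float_result, 0.3))
```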

When to Use Each Type

Aspect        | Floating-Point                 | Decimal Fixed-Point
Speed         | Faster execution               | Slower but more precise
Precision     | May have rounding errors       | Exact decimal representation
Use Cases     | Scientific computing, graphics | Financial calculations, currency
Memory Usage  | Less memory                    | More memory overhead
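For the financial use cases, you usually also want results rounded to exactly two decimal places. Decimal's quantize method does this; the sketch below uses ROUND_HALF_UP and a hypothetical 8.25% tax rate for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical example: 8.25% tax on a $19.99 item
subtotal = Decimal("19.99") * Decimal("0.0825")
print(subtotal)  # exact product, more than two decimal places

# quantize rounds to a fixed exponent − two decimal places here
tax = subtotal.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(tax)
```

Unlike getcontext().prec, quantize fixes the number of digits after the decimal point, which is what currency handling normally requires.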

Conclusion

Use floating-point numbers for general mathematical operations where small precision errors are acceptable. Use Decimal for financial applications and whenever exact decimal representation is required. Always pass strings to the Decimal() constructor to avoid precision loss.

Updated on: 2026-03-25T05:08:27+05:30
