Accuracy of Python Floating Point Numbers
Floating point numbers are represented in computer hardware as binary (base 2) fractions. On most machines today, Python floats are IEEE 754 double-precision values: each number is approximated by a binary fraction whose numerator uses the 53 most significant bits and whose denominator is a power of two.
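For example, 0.1 has no exact binary representation, so Python stores the nearest 53-bit binary fraction. The standard library lets us inspect that fraction directly (a quick sketch using float.as_integer_ratio and the decimal module):

```python
from decimal import Decimal

# 0.1 cannot be represented exactly in binary; Python stores the
# nearest binary fraction with a power-of-two denominator instead.
num, den = (0.1).as_integer_ratio()
print(num, "/", den)   # → 3602879701896397 / 36028797018963968
print(den == 2 ** 55)  # → True: the denominator is a power of two

# The exact decimal value of the stored approximation:
print(Decimal(0.1))    # slightly larger than 0.1
```

The stored value agrees with 0.1 only to about 17 significant decimal digits, which is why the test below eventually fails to distinguish 1 + delta from 1.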
In this Python program, we test the accuracy of Python floating point numbers by checking whether 1 + delta == 1 for successively smaller values of delta.
Python Source Code: Floating Point Accuracy
delta = 1
while delta > 1e-20:
    if 1 + delta == 1:
        print('1 +', delta, '= 1')
        flag = delta / 0.1  # last delta that was still distinguishable from 0
        break
    else:
        print('1 +', delta, '!= 1')
        delta = delta * 0.1

print("\n\nAccuracy of Python floating point number is:", flag)
Output
1 + 1 != 1
1 + 0.1 != 1
1 + 0.010000000000000002 != 1
1 + 0.0010000000000000002 != 1
1 + 0.00010000000000000003 != 1
1 + 1.0000000000000004e-05 != 1
1 + 1.0000000000000004e-06 != 1
1 + 1.0000000000000005e-07 != 1
1 + 1.0000000000000005e-08 != 1
1 + 1.0000000000000005e-09 != 1
1 + 1.0000000000000006e-10 != 1
1 + 1.0000000000000006e-11 != 1
1 + 1.0000000000000006e-12 != 1
1 + 1.0000000000000007e-13 != 1
1 + 1.0000000000000008e-14 != 1
1 + 1.0000000000000009e-15 != 1
1 + 1.000000000000001e-16 = 1

Accuracy of Python floating point number is: 1.0000000000000009e-15
From the above output, we can conclude that Python represents floating point numbers with a precision of roughly 15 significant decimal digits. This matches the 53-bit significand of an IEEE 754 double, since 53 × log₁₀(2) ≈ 15.95 decimal digits.
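As a cross-check, the standard library exposes this limit directly through sys.float_info, without running a loop at all. A short sketch:

```python
import sys

# Machine epsilon: the gap between 1.0 and the next representable float.
eps = sys.float_info.epsilon
print(eps)                 # about 2.22e-16

# 1 + eps is still distinguishable from 1, but half of eps rounds away.
print(1 + eps == 1)        # → False
print(1 + eps / 2 == 1)    # → True

# Number of decimal digits a float can reliably represent.
print(sys.float_info.dig)  # → 15
```

sys.float_info.dig confirms the 15-digit figure the program measured, and epsilon pins down the exact threshold near 1.0 where additions stop having any effect.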