## Why is floating point arithmetic in C# imprecise?


Why does the following program print what it prints?

```
using System;

class Program
{
    static void Main(string[] args)
    {
        float f1 = 0.09f * 100f;
        float f2 = 0.09f * 99.999999f;

        Console.WriteLine(f1 > f2);
    }
}
```

Output is

```
False
```

Floating point only has so many digits of precision. You're seeing f1 == f2 because any difference between the two expressions requires more precision than a 32-bit float can represent.

Why are floating point calculations so inaccurate? Because errors from repeated operations add up unless corrected. In computing, floating-point arithmetic (FP) is arithmetic using a formulaic representation of real numbers as an approximation, to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems that work with very small and very large real numbers and require fast processing times.

The main thing is that this isn't just .NET: it's a limitation of the underlying system almost every language uses to represent a float in memory. The precision only goes so far.

You can also have some fun with relatively simple numbers, when you take into account that it's not even base ten. 0.1, for example, is a repeating fraction when represented in binary.


In this particular case, it’s because .09 and .999999 cannot be represented with exact precision in binary (similarly, 1/3 cannot be represented with exact precision in decimal). For example, 0.111111111111111111101111 base 2 is 0.999998986721038818359375 base 10. Adding 1 to the previous binary value, 0.11111111111111111111 base 2 is 0.99999904632568359375 base 10. There isn’t a binary value for exactly 0.999999. Floating point precision is also limited by the space allocated for storing the exponent and the fractional part of the mantissa. Also, like integer types, floating point can overflow its range, although its range is larger than integer ranges.

Running this bit of C++ code in the Xcode debugger,

```
float myFloat = 0.1;
```

shows that myFloat gets the value 0.100000001. It is off by 0.000000001. Not a lot, but if the computation has several arithmetic operations, the imprecision can be compounded.

IMHO a very good explanation of floating point is in Chapter 14 of Introduction to Computer Organization with x86-64 Assembly Language & GNU/Linux by Bob Plantz of Sonoma State University (retired), available at http://bob.cs.sonoma.edu/getting_book.html. The following is based on that chapter.

Floating point is like scientific notation, where a value is stored as a mixed number greater than or equal to 1.0 and less than 2.0 (the mantissa), times another number to some power (the exponent). Floating point uses base 2 rather than base 10, but in the simple model Plantz gives, he uses base 10 for clarity’s sake. Imagine a system where two positions of storage are used for the mantissa, one position is used for the sign of the exponent* (0 representing + and 1 representing -), and one position is used for the exponent. Now add 0.93 and 0.91. The answer is 1.8, not 1.84.

9311 represents 0.93, or 9.3 times 10 to the -1.

9111 represents 0.91, or 9.1 times 10 to the -1.

The exact answer is 1.84, or 1.84 times 10 to the 0, which would be 18400 if we had 5 positions, but, having only four positions, the answer is 1800, or 1.8 times 10 to the zero, or 1.8. Of course, floating point data types can use more than four positions of storage, but the number of positions is still limited.

Not only is precision limited by space, but "an exact representation of fractional values in binary is limited to sums of inverse powers of two." (Plantz, op. cit.).

0.11100110 (binary) = 0.89843750 (decimal)

0.11100111 (binary) = 0.90234375 (decimal)

There is no exact representation of 0.9 decimal in binary. Even carrying the fraction out more places doesn’t work, as you get into repeating 1100 forever on the right.

Beginning programmers often see floating point arithmetic as more accurate than integer. It is true that even adding two very large integers can cause overflow. Multiplication makes it even more likely that the result will be very large and, thus, overflow. And when used with two integers, the / operator in C/C++ causes the fractional part to be lost. However, ... floating point representations have their own set of inaccuracies. (Plantz, op. cit.)

*In floating point, both the sign of the number and the sign of the exponent are represented.

Is floating point math broken? No; to see why, it's easiest to work through the conversion from a decimal fraction to a binary fraction. The imprecision arises because double is a floating point datatype. If you want greater accuracy in C#, you could switch to using decimal instead. The literal suffix for decimal is m, so to use decimal arithmetic (and produce a decimal result) you could write your code as var num = (3600.2m - 3600.0m); Note that there are disadvantages to using a decimal.

"Why floating-point numbers aren't really real" (The Craft of Coding) makes the same point: it's a problem caused by the internal representation of floating point numbers, which uses a fixed number of binary digits to represent a decimal number. Consider 0.1 + 0.1. You'd think it's 0.2, and you'd be wrong, because 0.1 isn't 0.1 in float: you cannot actually represent that number in a finite binary float.

Floating-point arithmetic may give inaccurate results in Excel as well; Microsoft documents this for Excel 2010, Excel 2013, and Excel for Office 365. As that article notes, floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating point is ubiquitous in computer systems.

On precision and accuracy in floating-point calculations: in a calculation involving both single and double precision, the result is no more precise than the least precise operand; if y is computed from a single precision value, y is as inaccurate as a single precision variable. Short answer: floating point representation (such as double) is inherently inaccurate. So is fixed point (such as decimal), but the inaccuracy in fixed-point representation is of a different kind.