## Is it correct to state that the first number that collides in single precision is 131072.02? (positive, considering 2 digits as mantissa)


I was trying to figure out, for my audio application, whether `float` can correctly represent the range of parameters I'll use.

The "biggest" range it needs is for frequency params, which are positive and allow max two digits as mantissa (i.e. from 20.00 hz to 22000.00 hz). Conceptually, the following digits will be rounded away, so I don't care about them.

So I made this script to check the first number that collides in single precision:

```cpp
#include <iostream>

int main() {
    float temp = 0.0;
    double valueDouble = 0.0;
    double increment = 1e-2;

    bool found = false;
    while (!found) {
        double oldValue = valueDouble;
        valueDouble += increment;
        float value = valueDouble;

        // found
        if (temp == value) {
            std::cout << "collision found: " << valueDouble << std::endl;
            std::cout << "   collide with: " << oldValue << std::endl;
            std::cout << "float stored as: " << value << std::endl;
            found = true;
        }

        temp = value;
    }
}
```

and it seems it's `131072.02` (colliding with `131072.01`; both are stored as the same `131072.015625` value), which is far beyond 22000.00. So it seems I'd be OK using `float`.

But I'd like to understand whether that reasoning is correct. Is it?

The whole problem would arise if I set a param of XXXXX.YY (`7 digits`) and it collided with some other number having fewer digits (because `single precision` only guarantees `6 digits`).

Note: of course numbers such as `1024.0002998145910169114358723163604736328125` or `1024.000199814591042013489641249179840087890625` collide, and they are within the interval, but they do so at more significant digits than my required two decimals, so I don't care.


Your program does not verify exactly what you want, but your underlying reasoning should be ok.

The problem with the program is that `valueDouble` will accumulate slight errors (since 0.01 isn't represented exactly in binary) - and converting the string "20.01" to a floating point number will likewise introduce slight round-off errors.

But those errors should be on the order of `DBL_EPSILON` and much smaller than the error you see.

If you really wanted to test it, you would have to write out "20.00" through "22000.00", scan them all using the scanf variant you plan to use, and verify that they all differ.


Is it correct to state that the first number that collides in single precision is 131072.02? (positive, considering 2 digits as mantissa after the decimal point)

Yes.

I'd like to understand whether that reasoning is correct. Is it?

For values just less than 131072.0f, each successive representable `float` value is 1/128th apart.

For values in the range [131072.0f ... 2*131072.0f), each successive representable `float` value is 1/64th apart.

With values of the decimal textual form "131072.xx", there are 100 combinations, yet only 64 distinct `float` values. It is not surprising that 100 − 64 = 36 collisions occur - see below. For numbers of this form, this is the first place the density of `float` is too sparse: the least significant bit of `float` is worth more than 0.01 in this range.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    volatile float previous = 0.0;
    for (long i = 1; i <= 99999999; i++) {
        volatile float f1 = i / 100.0;
        if (previous == f1) {
            volatile float f0 = nextafterf(f1, 0);
            volatile float f2 = nextafterf(f1, f1 * 2);
            printf("%f %f %f    delta fraction:%f\n", f0, f1, f2, 1.0 / (f1 - f0));
            static int count = 100 - 64;
            if (--count == 0) return 0;
        }
        previous = f1;
    }
    printf("Done\n");
}
```

Output

```
131072.000000 131072.015625 131072.031250    delta fraction:64.000000
131072.031250 131072.046875 131072.062500    delta fraction:64.000000
131072.046875 131072.062500 131072.078125    delta fraction:64.000000
...
131072.921875 131072.937500 131072.953125    delta fraction:64.000000
131072.937500 131072.953125 131072.968750    delta fraction:64.000000
131072.968750 131072.984375 131073.000000    delta fraction:64.000000
```

"Why floating-points number's significant numbers is 7 or 6" may also help.


• "i.e. from 20.00 hz to 22000.00 hz" - What you should be using is an integer in centihertz. That integer will have the range `[2000, 2200000]`, and no rounding problems whatsoever. Why? Because `float` cannot even represent `0.2` exactly!
• "max two digits as mantissa (i.e. from 20.00 hz to 22000.00 hz)". Eh, no, that's not a two-digit mantissa. `22000.00 Hz` would be `2.200000E5`, so the mantissa is 7 digits and the exponent is one digit. Except for the small bit where computers use base 2 and bits, not base 10 and digits.
• The "error accumulation" doesn't matter, as the result (as printed) accounts for that error. Error accumulation only means that `131072` won't be reached in exactly `13107200` steps.
• @MSalters: The error accumulation does matter (in general, not necessarily with the specific numbers in this case). It means the values in `valueDouble` will not be the numbers the OP intended to test. In theory, there could be some point where the numbers being tested should be a and b but the errors result in testing a+e0 and b+e1, and a and b collide while a+e0 and b+e1 do not, or vice-versa, so the result is wrong. The test for a collision is done prior to printing anything, so conversion for output has no effect on the tests.
• @EricPostpischil: That doesn't make sense. `x` and `x+0.01` collide, as soon as `x+0.01` differs only in the 25th bit. Recall how addition works: the smallest number has its exponent adjusted to the largest number, shifting its mantissa to the right. As soon as that shift is 25 places, you're adding 0, and you have a collision. All numbers `x` with that same magnitude will cause the same "shift 25 places, add 0"
• @MSalters: First, if you have `x` and `x+.01`, they do not differ by .01, since `x+.01` necessarily has a rounding error in binary floating-point. So `x` and `x+0.01` might differ in the 25th bit while `x` and `x`+.01 do not differ in the 25th bit. Second, even if both the pair `x` and `x+.01` and the pair `x` and `x`+.01 differ in the 25th bit, `x` has some accumulated error, and it is standing in place of the ideal y we would have at this point in the program, and y and y + .01 might not differ in the 25th bit even though `x` and `x`+.01 do, or vice-versa.