What is the exact value of the first type-punning in the fast inverse square root algorithm?

Having this code (this is the well-known fast inverse square root algorithm that was used in Quake 3), I am unable to understand the printed output. I understand the entire algorithm superficially but want to get an in-depth understanding. What is the value of i when printf prints it? This gives 1120403456 for me. Does it depend on the computer's architecture? I've read somewhere that type-punning like this results in undefined behaviour. On another website I've read that the value of i at that point is the exact value of the bits that are used by this variable. I am confused by this and honestly expected the value of i to be 100. How do I interpret the resulting 1120403456? How do I convert this value back to the decimal 100? Are those bits encoded somehow? This is the code excerpt:

#include <stdio.h>

int main()
{
    float x = 100;
    float xhalf = 0.5f*x;
    int i = *(int*)&x;
    printf("%d\n", i);
    i = 0x5f3759df - (i>>1);
    x = *(float*)&i;
    x = x*(1.5f-xhalf*x*x);
    return x;
}

The value of i printed after int i = *(int*)&x; is the bit representation of the floating-point value 100.0, which is what x was initialised to. Since you're using the %d format in printf, it prints that representation as a decimal integer.

The bit pattern of 100.0 in an IEEE-754 32-bit float is 0x42c80000, which is 1120403456 in decimal.
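
Not from the original answer, but here is a minimal standalone sketch of the same observation, assuming a 32-bit int and IEEE-754 floats like the question's code does; it copies the bytes with memcpy instead of dereferencing a casted pointer, which sidesteps the strict-aliasing concern the question mentions:

#include <stdio.h>
#include <string.h>

int main(void)
{
    float x = 100.0f;
    unsigned int bits;   /* assumes unsigned int is 32 bits, like the question's int */

    /* Copy the object representation instead of doing *(int*)&x. */
    memcpy(&bits, &x, sizeof bits);

    printf("%u decimal = 0x%08X hex\n", bits, bits);
    /* On an IEEE-754 machine this prints: 1120403456 decimal = 0x42C80000 hex */
    return 0;
}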

It is much older than Quake III, although the exact magic constant has varied a bit.

The encoding of a finite single-precision floating point number according to IEEE-754 (which is what most modern computers use) is

     31 30           23 22                                          0
     ┌─┬───────────────┬─────────────────────────────────────────────┐
     │S│   Exponent    │                   Mantissa                  │
     └─┴───────────────┴─────────────────────────────────────────────┘

The highest bit, S, is the sign bit. The value is negative if it is set, positive otherwise.

If Exponent and Mantissa are both zero, then the value is either 0.0 (if S is zero) or -0.0 (if S is 1).

If Exponent is zero, but Mantissa nonzero, you have a denormal number very close to zero.

Representations where Exponent is 255 (all bits set) are reserved for infinity (if Mantissa is zero), and not-a-number (if Mantissa is nonzero).

For Exponent values from 1 to 254, inclusive, Mantissa contains the fractional bits of the mantissa, with an implicit 1 in the ones position. (This is what the (1) means in e.g. 0 10000001 (1)10110110110110110110110.) In other words, the value Mantissa represents for these Exponent values ranges from 1.00000000000000000000000₂ (1.0 in decimal) to 1.11111111111111111111111₂ (1.99999988 in decimal), inclusive.

Now, if we consider only nonnegative values (with S clear), with E = Exponent (between 1 and 254, inclusive), and M the logical value of Mantissa (between 1.0 and 1.99999988), then the value V the single-precision floating point represents is

        V = 2^(E − 127) × M
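
As a concrete check (my own worked example, assuming IEEE-754 single precision and a 32-bit unsigned int), this is how the questioner's 100.0f decodes under that formula: 0x42C80000 splits into S = 0, Exponent = 133 and mantissa bits 0x480000, giving M = 1.5625 and V = 2^(133 − 127) × 1.5625 = 100:

#include <stdio.h>
#include <string.h>
#include <math.h>

int main(void)
{
    float x = 100.0f;
    unsigned int bits;
    memcpy(&bits, &x, sizeof bits);          /* 0x42C80000 for 100.0f */

    unsigned int S = bits >> 31;             /* sign bit        */
    unsigned int E = (bits >> 23) & 0xFF;    /* biased exponent */
    unsigned int m = bits & 0x7FFFFF;        /* mantissa field  */

    double M = 1.0 + m / 8388608.0;          /* 1 + m / 2^23    */
    double V = ldexp(M, (int)E - 127);       /* 2^(E - 127) * M */

    printf("S=%u E=%u m=0x%06X M=%g V=%g\n", S, E, m, M, V);
    /* prints: S=0 E=133 m=0x480000 M=1.5625 V=100 */
    return 0;
}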

The 127 is the exponent bias. The square root of V is

        √V = 2^((E − 127)/2) × √M = 2^(E/2 − 127) × 2^63 × √2 × √M

and the inverse square root, which is the target operation here,

        1/√V = 2^((127 − E)/2) × 1/√M = 2^(−E/2) × 2^63 × √2 × 1/√M

noting that 2^(127/2) = 2^63.5 = 2^63 × √2.

If we take the integer representation, and shift it one bit to the right, we effectively halve E. To multiply by 2^63, we need to add 63 to the exponent field, which means adding 63×2^23 to the integer representation. However, for the square root operation, instead of multiplying by √2 × √M, we effectively just multiply by M/2. This means that (shifting the integer representation one bit right, then adding 63 to the exponent) yields an estimate of a square root that is off by a factor between 0.7071067 and 0.75 for arguments greater than 1.0, and by a factor between 0.7071067 and 1448.155 for values between 0 and 1. (It yields 5.421×10^−20 for √0, and 0.75 for √1.)
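
The rough estimate described in the previous paragraph can be written out as follows (a sketch of my own, not the Quake code; it assumes a 32-bit unsigned int and IEEE-754 floats, and uses memcpy for the punning):

#include <stdio.h>
#include <string.h>
#include <math.h>

/* Halve the exponent by shifting the whole bit pattern right by one,
   then add 63 to the exponent field (63 * 2^23 to the integer). */
static float sqrt_rough(float x)
{
    unsigned int i;
    memcpy(&i, &x, sizeof i);
    i = (i >> 1) + (63u << 23);
    memcpy(&x, &i, sizeof x);
    return x;
}

int main(void)
{
    printf("%g vs %g\n", sqrt_rough(100.0f), sqrt(100.0));
    /* prints 7.125 vs 10: off by a factor of 0.7125, inside the
       0.7071067 to 0.75 range quoted above */
    return 0;
}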

Note that for the inverse square root operation, we wish to negate E.

It turns out that if you shift the integer representation right by one bit, and then subtract that from (0 10111110 01101110101100111011111)₂ (1597463007 in decimal, 0x5f3759df in hexadecimal, a single-precision floating-point approximation of √(2^127) ≈ 13043817825332782212), you get a pretty good approximation of the inverse square root (1/√V). For all finite normal arguments, the approximation is off by a factor of 0.965624 to 1.0339603 from the correct value; a very good approximation! All you need on top is one or two iterations of Newton's method for the inverse square root. (For f(X) = 0 exactly when X = 1/√V you can use f(X) = 1/X^2 − V. Each iteration in X is then X − f(X)/f'(X) = X × (1.5 − 0.5 × V × X × X), which should look quite familiar.)
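
To tie this back to the question's excerpt, here is a sketch of the whole approximation written with memcpy instead of the pointer casts (my own arrangement, still assuming IEEE-754 32-bit floats and a 32-bit int; the constant and the Newton step are exactly those described above):

#include <stdio.h>
#include <string.h>
#include <math.h>

static float inv_sqrt(float x)
{
    float xhalf = 0.5f * x;
    unsigned int i;

    memcpy(&i, &x, sizeof i);         /* reinterpret the float's bits     */
    i = 0x5f3759df - (i >> 1);        /* magic constant minus halved bits */
    memcpy(&x, &i, sizeof x);         /* back to float: first estimate    */

    x = x * (1.5f - xhalf * x * x);   /* one Newton iteration             */
    return x;
}

int main(void)
{
    printf("approx %g, exact %g\n", inv_sqrt(100.0f), 1.0 / sqrt(100.0));
    return 0;
}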

Comments
  • That pointer cast does not magically convert float to int. It only casts the pointer type, so dereferencing it reads the float's bit pattern as an int. The value of int i cannot be 100 (unless there is a rare matching example). Every line after printf is irrelevant.
  • I obviously know that every line after printf is irrelevant. What does it mean that it outputs the bit "pattern" in int format? What does such a pattern look like? Where can I read more about bit patterns in particular types?
  • So why post them? What you have is a 32-bit value which is interpreted differently in its context.
  • This square-root kludge relies on the implementation using the IEEE-754 basic 32-bit binary floating-point format, one description of which is here. The code reinterprets bytes in a way that is not supported by the current C standard. It could be corrected by using a union or by copying bytes (a sketch of the union version follows after these comments). The particular value 0x5f3759df is the encoding for the floating-point value 13211836172961054720, which has an exponent of 63 and a significand of 0x1.6eb3be.
  • Understanding the kludge requires an understanding of the floating-point format and certain mathematics. I am sure there are explanations on the web, possibly on Stack Overflow. On any hardware with a square-root instruction, the kludge is obsolete.
  • So if I convert 0x42c80000 to binary and split the resulting number according to en.wikipedia.org/wiki/Single-precision_floating-point_format (and by that I mean into sign bit, exponent, and significand), will I get 100 after conversion to IEEE-754?
  • Yes. You can use some online floating-point representation tool to see that the bit patterns are identical. x and i contain the same bit pattern.
  • Thank you for help
  • Thank you for help, your answer helped me a lot!
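
As the comment above suggests, the pointer cast can be replaced by a union (or by copying bytes). Here is a minimal sketch of the union version, valid in C but formally undefined behaviour in C++, assuming a 32-bit unsigned int and IEEE-754 floats:

#include <stdio.h>

int main(void)
{
    /* Writing one union member and reading another reinterprets the
       bytes; C allows this, C++ formally does not. */
    union { float f; unsigned int u; } pun;

    pun.f = 100.0f;
    printf("float %g has bit pattern 0x%08X (%u decimal)\n", pun.f, pun.u, pun.u);
    /* expected on an IEEE-754 machine: 0x42C80000 (1120403456 decimal) */
    return 0;
}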