Fast Inverse Square Root

February 1, 2026



Fast Inverse Square Root is an algorithm that computes an approximation of $1/\sqrt{x}$, the inverse square root of a 32-bit floating-point number. The algorithm most notably helped optimize graphics calculations in the 1999 video game Quake III Arena.

A unit vector is a vector that has been scaled to have a length of 1. Game engines normalize vectors so they have unit length while preserving direction. Unit vectors make dot products, reflections, and direction computations stable and comparable.

To find the normalized vector $v_{norm}$, we can multiply the vector $v=(x,y,z)$ by the inverse of the vector's magnitude, which is the square root of the dot product of the vector with itself.

$$v_{norm} = \frac{v}{\|v\|} = v \cdot \frac{1}{\sqrt{v\cdot v}} = v \cdot \frac{1}{\sqrt{x^2+y^2+z^2}}$$

The motivation for this algorithm is to compute this inverse square root quickly. The following C implementation includes the original comments from the Quake III source code:

float Q_rsqrt( float number )
{
	long i;
	float x2, y;
	const float threehalfs = 1.5F;

	x2 = number * 0.5F;
	y  = number;
	i  = * ( long * ) &y;                       // evil floating point bit level hacking
	i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
	y  = * ( float * ) &i;
	y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
//	y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed

	return y;
}

IEEE 754

The IEEE 754 standard specifies a 32-bit float as having a sign bit, 8 exponent bits, and 23 significand bits. We can ignore the sign bit for now, since the algorithm only accepts positive inputs: $\sqrt{x}$ is only defined over the reals for $x \geq 0$, and $1/\sqrt{x}$ requires $x > 0$.


The exponent field is biased by 127: a stored value of 127 represents an exponent of zero. Exponents therefore range from −126 to +127 (field values 1 to 254), since field values 0 and 255 are reserved for special numbers.

The significand bits have an implicit leading bit with value 1. The remaining 23 bits represent the fractional part of the significand, meaning that the significand is always in the range $[1.0, 2.0)$. Thus, a normalized floating-point number $x$ can be represented as:

$$x=(-1)^{S_{x}}\cdot 2^{e_{x}}(1+m_{x})$$

Bit Hack

Since the algorithm operates on the bit-level representation of floating-point numbers, we need to reinterpret the bits of a float as a long. This is done using pointer casting in C:

i  = * ( long * ) &y;

This line takes the address of the float variable y, casts it to a pointer to a long, and then dereferences it to get the bit representation of the float as a long integer.

Logarithm Approximation

The core of the algorithm is treating the float’s exponent as a base-2 logarithm, halving it to approximate a square root, then minimizing the error of the approximation using a constant.

Since $m_{x}\in [0,1)$, we can approximate $\log_{2}(1+m_{x})\approx m_{x}+\sigma$, where $\sigma$ is a tunable constant. Setting $\sigma = 0$ makes the approximation exact at $m_x = 0$ and $m_x = 1$, but leads to larger interior error.

An optimal approximation of σ\sigma can be found using techniques such as least squares or minimax approximation.

Let’s alias a float to an integer to get a rough approximation of its logarithm. Starting with the float $x$:

$$\begin{aligned} x &=2^{e_{x}}(1+m_{x}) \\ \log_{2}(x) &=e_{x}+\log_{2}(1+m_{x}) \\ \log_{2}(x) &\approx e_{x}+m_{x}+\sigma \end{aligned}$$

Using this intermediate approximation and interpreting the float $x$ as an integer $I_x$, i.e., I_x = * ( long * ) &x;, we have:

$$\begin{aligned} I_{x}&=2^{23}E_{x}+M_{x} \\ &=2^{23}(e_{x}+127+m_{x}) \\ &=2^{23}(e_{x}+m_{x}+\sigma +127-\sigma ) \\ &\approx 2^{23}\log_{2}(x)+2^{23}(127-\sigma ) \\ \log_{2}(x) &\approx \frac{I_{x}}{2^{23}}-(127-\sigma ). \end{aligned}$$

Now, we can use this approximation and the logarithm identity:

$$\log_{b}\!\left(\frac{1}{\sqrt{x}}\right) = \log_{b}\!\left(x^{-1/2}\right) = -\tfrac{1}{2}\log_{b}(x)$$

to calculate $y=\frac{1}{\sqrt{x}}$:

$$\begin{aligned} \log_{2}(y)&=-\tfrac{1}{2}\log_{2}(x) \\ \frac{I_{y}}{2^{23}}-(127-\sigma)&\approx -\frac{1}{2}\left(\frac{I_{x}}{2^{23}}-(127-\sigma)\right) \\ I_{y}&\approx \tfrac{3}{2}\cdot 2^{23}(127-\sigma)-\tfrac{1}{2}I_{x} \end{aligned}$$

This approximation for $I_y$ is implemented in the line:

i  = 0x5f3759df - ( i >> 1 );

where the constant 0x5f3759df corresponds to $\tfrac{3}{2}\cdot 2^{23}(127-\sigma)$ with $\sigma \approx 0.0450466$. It is not known how the original authors arrived at this particular value.

Newton’s Method

The initial approximation can be refined using Newton’s method, which is an iterative method for finding successively better approximations to the roots of a function. Given an approximation $y_n$ for $y$, a better approximation $y_{n+1}$ is given by $y_{n}-\frac{f(y_{n})}{f'(y_{n})}$, where $f'(y_{n})$ is the derivative of $f(y)$ at $y_n$.


We want to approximate $y = 1/\sqrt{x}$, which can be reformulated as finding the root of the function $f(y) = \frac{1}{y^2} - x$. Then we can apply Newton’s method:

$$\begin{aligned}f(y)&=\frac{1}{y^{2}}-x\\f'(y)&=-\frac{2}{y^{3}}\\y_{n+1}&=y_{n}-\frac{f(y_{n})}{f'(y_{n})}\\&=y_{n}+\frac{y_{n}^{3}}{2}\left(\frac{1}{y_{n}^{2}}-x\right)\\&=y_{n}\left(\frac{3}{2}-\frac{x}{2}y_{n}^{2}\right)\end{aligned}$$

An iteration of this method is implemented in the line:

y  = y * ( threehalfs - ( x2 * y * y ) );

Example

Let’s approximate the inverse square root of 4 using the Fast Inverse Square Root algorithm. The IEEE 754 representation of 4 is 0x40800000 in hexadecimal. Shift right by one bit to get 0x20400000.

Then subtract this from the magic constant 0x5f3759df to get 0x3ef759df. Reinterpreting 0x3ef759df as a float gives us an initial approximation of $y \approx 0.48310754$.

Performing one iteration of Newton’s method gives us $y \approx 0.49915357$, which is very close to the actual value of $y = \frac{1}{\sqrt{4}} = 0.5$.