Normal to an implicit surface

\( \renewcommand{\vec}[1]{ \mathbf{#1} }  \)

Formal definition

The normal \( \overrightarrow n \) at a position \( \vec p(x,y,z) \in \mathbb R^3\) of a distance field \( f:\mathbb R^3 \rightarrow \mathbb R\) is computed with the gradient \( \nabla f \in \mathbb R^3 \):

$$ \overrightarrow{ n }( \vec p)  = \frac{ \nabla f(x, y, z) }{ \| \nabla f(x, y, z) \| }$$

The gradient is a vector of the partial derivatives of \( f \) in \( x\), \( y\) and \( z\):

$$ \nabla f(x,y,z) = \left[ \frac{\partial f(x,y,z)}{\partial x} , \frac{\partial f(x,y,z)}{\partial y} , \frac{\partial f(x,y,z)}{\partial z} \right] = \left[ f_x(x,y,z), f_y(x,y,z), f_z(x,y,z) \right] $$

Note that implicit surfaces are sometimes represented with a compactly supported distance field. In this case the distance field \( f \) usually has decreasing values from the center of the primitive, so the gradient points in the direction opposite to the normal: \( \overrightarrow n = - \nabla f / \| \nabla f \| \)

Concrete application

If the above does not make any sense to you, here is a concrete example. Say you want to compute the normal vector of a sphere of center \( \vec c \in \mathbb R^3\) and radius \( r \):

$$ \begin{array}{lll} f(\vec p(x,y,z)) & = & \| \vec p - \vec c \|^2 - r^2  \\ & = & ((x - c_x)^2 + (y - c_y)^2 + (z - c_z)^2) - r^2  \end{array}$$

Here we use the squared distance, which avoids the square root and is easy to differentiate; the surface \( f = 0 \) still describes the same sphere.

Then you need to compute \( \nabla f = (f_x, f_y, f_z) \):

$$
\begin{array}{lll}
f_x(x,y,z) & = & 2 \cdot (x-c_x)^{2-1} \cdot 1 + 0 + 0 - 0  = 2(x-c_x)\\
f_y(x,y,z) & = & 2(y-c_y) \\
f_z(x,y,z) & = & 2(z-c_z)
\end{array}
$$

In code:

float sphere(vec3 p, vec3 center, float radius){
    vec3 d = p - center;
    return dot(d, d) - radius*radius; // squared length minus squared radius
}
vec3 sphere_gradient(vec3 p, vec3 center){
    return vec3( 2.0*(p.x - center.x), 2.0*(p.y - center.y), 2.0*(p.z - center.z));
}
vec3 sphere_normal(vec3 p, vec3 c){
    return normalize(sphere_gradient(p, c));
}
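As a sanity check, the same computation can be written as a minimal Python sketch (the helper names below are mine, not part of the original shader code):

```python
import math

def sphere(p, center, radius):
    # Squared sphere field: ||p - center||^2 - radius^2
    d = [p[i] - center[i] for i in range(3)]
    return d[0]*d[0] + d[1]*d[1] + d[2]*d[2] - radius*radius

def sphere_gradient(p, center):
    # Analytic gradient derived above: (2(x-cx), 2(y-cy), 2(z-cz))
    return [2.0 * (p[i] - center[i]) for i in range(3)]

def normalize(v):
    n = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return [x / n for x in v]

def sphere_normal(p, center):
    return normalize(sphere_gradient(p, center))

# On a sphere centered at the origin the normal points from the
# center towards the surface point:
print(sphere_normal([3.0, 0.0, 0.0], [0.0, 0.0, 0.0]))  # → [1.0, 0.0, 0.0]
```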

Finite difference

Say your function \( f \) is not defined by a mathematical formula; for instance, the values \(f(\vec p_1)\), \(f(\vec p_2)\), \( \cdots \), \(f(\vec p_n)\) are stored in a 3D texture. Or perhaps you are lazy and don't want to differentiate \( f \) by hand. There is still a way to find \( \nabla f = (f_x, f_y, f_z) \) without knowing what's inside \( f \). To this end you can use what we call finite differences. Recall how you can approximate the derivative of a 1D function \( g: \mathbb R \rightarrow \mathbb R\) with respect to \( x \) using the so-called first order central difference:

$$ g'(x) \approx \frac{g(x+h) - g(x-h) }{2h} $$

Because our function \( f \) is multivariate (i.e. has several variables) the finite difference in \( x\), \( y\) and \( z\) are:

$$ f_x(x,y,z) \approx \frac{f(x+h,y,z) - f(x-h,y,z) }{2h} $$ $$ f_y(x,y,z) \approx \frac{f(x,y+h,z) - f(x,y-h,z) }{2h} $$ $$ f_z(x,y,z) \approx \frac{f(x,y,z+h) - f(x,y,z-h) }{2h} $$

Where \( h \) is usually a small number (e.g. 0.00001). This is called numerical differentiation; it only approximates the true value of \(f_x\), but it enables you to differentiate any function.
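For example, applying the central difference to a 1D function whose derivative we know by hand shows how close the approximation gets (a small Python sketch):

```python
def g(x):
    # A function we can differentiate by hand: g'(x) = 2x
    return x * x

def central_difference(g, x, h=0.00001):
    # First order central difference: (g(x+h) - g(x-h)) / (2h)
    return (g(x + h) - g(x - h)) / (2.0 * h)

print(central_difference(g, 3.0))  # very close to the exact derivative g'(3) = 6
```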

In code:

// For simplicity the center and radius are inlined
float sphere(vec3 p){
    vec3 center = vec3(0.0, 1.0, 0.0);
    float radius = 20.0;
    vec3 d = p - center;
    return dot(d, d) - radius*radius;
}
vec3 sphere_gradient(vec3 p){
    float h = 0.00001;
    // Note that sphere() can actually be replaced by any function
    // representing a distance field, since this approximation is universal
    float fx = (sphere(vec3(p.x+h, p.y, p.z)) - sphere(vec3(p.x-h, p.y, p.z))) / (2.0*h);
    float fy = (sphere(vec3(p.x, p.y+h, p.z)) - sphere(vec3(p.x, p.y-h, p.z))) / (2.0*h);
    float fz = (sphere(vec3(p.x, p.y, p.z+h)) - sphere(vec3(p.x, p.y, p.z-h))) / (2.0*h);
    return vec3( fx, fy, fz );
}
vec3 sphere_normal(vec3 p){
    return normalize(sphere_gradient(p));
}
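The same 6-evaluation scheme translates directly to Python; here it is checked against the analytic gradient \( 2(\vec p - \vec c) \) of the squared sphere field (helper names are mine):

```python
def sphere(p):
    # Inlined center (0, 1, 0) and radius 20, as in the snippet above
    center = (0.0, 1.0, 0.0)
    radius = 20.0
    d = [p[i] - center[i] for i in range(3)]
    return d[0]*d[0] + d[1]*d[1] + d[2]*d[2] - radius*radius

def numeric_gradient(f, p, h=0.00001):
    # Central difference along each axis: 6 evaluations of f in total
    grad = []
    for axis in range(3):
        plus  = list(p); plus[axis]  += h
        minus = list(p); minus[axis] -= h
        grad.append((f(plus) - f(minus)) / (2.0 * h))
    return grad

p = [5.0, 2.0, -3.0]
# Close to the analytic gradient 2*(p - center) = [10, 2, -6]:
print(numeric_gradient(sphere, p))
```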

Note: for a sphere you could obviously simplify the computation a lot with \( \overrightarrow{ n }( \vec p)  = normalized( \vec p - \vec{c})  \), but the point was to demonstrate how to compute a normal and gradient for any distance field.

Optimized finite difference

In the above paragraph we need to evaluate the distance field function sphere() 6 times to compute the normal. If the distance field does not actually represent a single sphere but a combination of thousands of spheres, then each call to sphere() can be quite expensive. There is another way to compute the normal with only 4 evaluations of \( f \), using another finite difference scheme. The scheme is used as follows:

$$
\overrightarrow{ n }( \vec p)  = normalized \left ( \sum_{i=1}^{4} {\vec k_i \, f(\vec p + h \, \vec k_i )} \right )
$$

With \( \vec k_i \in \mathbb R^3 \) the following four directions:
$$
\vec k_i = \left \{ \vec k_1 = (1,-1,-1), \vec k_2 = (-1,-1,1), \vec k_3 = (-1,1,-1), \vec k_4 = (1,1,1)\right \}
$$

If you trust this formula then it translates to the following optimized code:

vec3 sphere_normal(vec3 p){
    float e = 0.00001;
    // Again sphere() can be any distance field function or combination of distance fields.
    return normalize( vec3( 1.0, -1.0, -1.0) * sphere( p + vec3( e, -e, -e) ) +
                      vec3(-1.0, -1.0,  1.0) * sphere( p + vec3(-e, -e,  e) ) +
                      vec3(-1.0,  1.0, -1.0) * sphere( p + vec3(-e,  e, -e) ) +
                      vec3( 1.0,  1.0,  1.0) * sphere( p + vec3( e,  e,  e) ) );
}
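A Python sketch of the same 4-evaluation tetrahedron scheme, using the unit sphere so the expected normal is obvious (names are mine):

```python
import math

def sphere(p):
    # Squared field of a unit sphere centered at the origin
    return p[0]*p[0] + p[1]*p[1] + p[2]*p[2] - 1.0

def tetrahedron_normal(f, p, e=0.00001):
    # The four tetrahedron vertex directions k_1 .. k_4
    ks = [(1, -1, -1), (-1, -1, 1), (-1, 1, -1), (1, 1, 1)]
    m = [0.0, 0.0, 0.0]
    for k in ks:
        # Only one evaluation of f per direction: 4 in total
        v = f([p[0] + k[0]*e, p[1] + k[1]*e, p[2] + k[2]*e])
        for axis in range(3):
            m[axis] += k[axis] * v
    n = math.sqrt(m[0]*m[0] + m[1]*m[1] + m[2]*m[2])
    return [x / n for x in m]

# At (0, 1, 0) on the unit sphere the normal is (0, 1, 0):
print(tetrahedron_normal(sphere, [0.0, 1.0, 0.0]))
```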

Although you can use this formula as is, for curious people I explain below how it can be inferred. I was taught this trick by Íñigo Quílez, who kindly explained to me the mathematical reasoning behind it; I transcribe it here.

Here we use directional derivatives. Similarly to partial differentiation, where we differentiate along one axis (\(x\), \(y\) or \(z\)), we can differentiate along an arbitrary vector \( \vec v \). In addition, directional derivatives can be approximated with a finite difference. The definition of the directional derivative \( \nabla_{\vec v}{f} : \mathbb R^n \rightarrow \mathbb R \) along \( \vec v \) is as follows:

$$
\begin{array}{lll}
\nabla_{\vec v}{f}(\vec p) & = & \nabla f(\vec p ) \cdot \vec v \phantom{---}\small{\text{ (dot product between the gradient and the direction)}}\\
& = & \lim_{h \rightarrow 0}{ (f(\vec p + h\vec v ) - f(\vec p)) / {h} }
\end{array}
$$

The trick consists in doing a weighted sum of four directional derivatives of \(f\):

$$
\begin{array}{lll}
\overrightarrow{ m } & = & \sum\limits_{i=1}^{4} \vec k_i \nabla_{\vec k_i}{f}(\vec p) \\
& = &  4 \nabla f(\vec p)
\end{array}
$$

Because we carefully chose four directions \( \vec k_i \) pointing towards the four vertices of a tetrahedron, the terms of this formula cancel out and it simplifies to the gradient of \( f \)!

First, let's see how the formula relates to finite differences:

$$
\begin{array}{lll}
\overrightarrow{ m } & = & \sum\limits_{i=1}^{4} \vec k_i \nabla_{\vec k_i}{f}(\vec p)  \\
& \approx & \sum\limits_{i=1}^{4} \frac{\vec k_i \left( f(\vec p + h\vec k_i ) - f(\vec p) \right)}{h} \\
& = & \frac{1}{h} \sum\limits_{i=1}^{4} \vec k_i \, f(\vec p + h\vec k_i )  -  \frac{f(\vec p)}{h} \sum\limits_{i=1}^{4} \vec k_i \\
& = & \frac{1}{h} \sum\limits_{i=1}^{4} \vec k_i \, f(\vec p + h\vec k_i ) \phantom{---}\small{\text{since } \vec k_1 + \vec k_2 + \vec k_3 + \vec k_4 = (0,0,0)}
\end{array}
$$

Nice: with finite differences, \( \overrightarrow{m} \) simplifies to only four evaluations of \( f\). Note that if you normalize the result you don't even need to divide by \( h \). Now how does \( \overrightarrow{m} \) relate to the gradient \( \nabla f \)?

$$
\begin{array}{lll}
\overrightarrow{ m } & = & \sum\limits_{i=1}^{4} {\vec k_i \nabla_{\vec k_i}{f}(\vec p)} \\
& = & \sum\limits_{i=1}^{4}{ \vec k_i (\vec k_i \cdot \nabla f(\vec p)) } \phantom{--}\small{\text{the operator }(\phantom{a}\cdot\phantom{a})\text{ represents the dot product}} \\
& \phantom{=} & \\
& = & \sum\limits_{i=1}^{4}
\begin{matrix}
k_{ix} \phantom{-}(\vec k_i \cdot \nabla f(\vec p)) \\
k_{iy} \phantom{-}(\vec k_i \cdot \nabla f(\vec p)) \\
k_{iz} \phantom{-}(\vec k_i \cdot \nabla f(\vec p)) \\
\end{matrix} \phantom{--}\small{\text{exposing the components}} \\
& \phantom{=} & \\
& = & \sum\limits_{i=1}^{4}
\begin{matrix}
(k_{ix} \phantom{*} \vec k_i) \cdot \nabla f(\vec p) \\
(k_{iy} \phantom{*} \vec k_i) \cdot \nabla f(\vec p) \\
(k_{iz} \phantom{*} \vec k_i) \cdot \nabla f(\vec p) \\
\end{matrix} \phantom{--}\small{\text{scalar multiplication moves freely across the dot product}} \\
& \phantom{=} & \\
& = &
\begin{matrix}
\nabla f(\vec p) \cdot \sum\limits_{i=1}^{4} (k_{ix} \phantom{*} \vec k_i) \\
\nabla f(\vec p) \cdot \sum\limits_{i=1}^{4} (k_{iy} \phantom{*} \vec k_i) \\
\nabla f(\vec p) \cdot \sum\limits_{i=1}^{4} (k_{iz} \phantom{*} \vec k_i) \\
\end{matrix} \phantom{--}\small{\text{the gradient does not depend on } i \text{, so it can be taken out of the sums}} \\
& \phantom{=} & \\
& = &
\begin{matrix}
\nabla f(\vec p) \cdot (4,0,0) \\
\nabla f(\vec p) \cdot (0,4,0) \\
\nabla f(\vec p) \cdot (0,0,4) \\
\end{matrix} \phantom{--}\small{\text{since } \textstyle\sum_i k_{ix}^2 = 4 \text{ and } \sum_i k_{ix} k_{iy} = \sum_i k_{ix} k_{iz} = 0 \text{ (same for } y \text{ and } z)}  \\
& \phantom{=} & \\
& = & 4 \nabla f(\vec p)
\end{array}
$$

And here you have it, the gradient scaled by a factor of 4.
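The two identities the derivation relies on are easy to check numerically: the four directions sum to the zero vector (which cancels the \( f(\vec p) \) terms), and \( \sum_i k_{ix} \vec k_i = (4,0,0) \), with the analogous results for \( y \) and \( z \). A quick Python check:

```python
ks = [(1, -1, -1), (-1, -1, 1), (-1, 1, -1), (1, 1, 1)]

# k_1 + k_2 + k_3 + k_4 = (0, 0, 0)
print([sum(k[a] for k in ks) for a in range(3)])  # → [0, 0, 0]

# sum_i k_ix * k_i = (4, 0, 0), and similarly for the y and z components:
for axis in range(3):
    print([sum(k[axis] * k[a] for k in ks) for a in range(3)])
# → [4, 0, 0], [0, 4, 0], [0, 0, 4]
```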
