Skin weight optimization (main)

Related: ARAP LBS paper, SR ARAP.


Linear Blend Skinning (LBS)

Each vertex \( \mathbf p_i \) of the rest pose mesh is deformed by blending the skinning transformations \( T_j \) of its \( n_i \) influencing joints with weights \( w_{ij} \):

$$ \begin{equation*}
\bar{\mathbf{p_i}} = \sum_{j=1}^{n_i} w_{ij}  T_j  \mathbf{p_i}
\end{equation*} $$

Let \( \mathbf s_i:\mathbb R^{n_i} \rightarrow \mathbb R^3\) be the skinning function that takes as input the weight vector \( \vec w_i \in \mathbb R^{n_i} \) of vertex \( i \) and outputs its deformed position:

$$ \begin{equation*}
\mathbf s_i(\vec w_i) = \sum_{j=1}^{n_i} w_{ij}  T_j  \mathbf{p_i}
\end{equation*} $$
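As a reference, here is a minimal numpy sketch of this per-vertex skinning function (the names `Ts`, `joint_ids` and the use of 4×4 homogeneous matrices are assumptions of this example, not part of the derivation above):

```python
import numpy as np

def skin_vertex(p_i, w_i, Ts, joint_ids):
    """LBS of one vertex: s_i(w_i) = sum_j w_ij * T_j * p_i.

    p_i       : (3,) rest position of the vertex.
    w_i       : (n_i,) skin weights (one per influencing joint).
    Ts        : (m, 4, 4) homogeneous skinning transformations T_j.
    joint_ids : (n_i,) joints influencing this vertex (sparse weights).
    """
    p_h = np.append(p_i, 1.0)                  # homogeneous rest position
    p_bar = np.zeros(3)
    for w_ij, j in zip(w_i, joint_ids):
        p_bar += w_ij * (Ts[j] @ p_h)[:3]      # w_ij * T_j * p_i
    return p_bar
```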

LBS matrix form

Let's express the skinning of every vertex as the product of a matrix \( \mathbf S \in \mathbb R^{|\mathbf p| \times |\vec w|} \) with the vector \( \vec w \in \mathbb R^{|\vec w|} \) such that:

$$ \bar{\mathbf p} = \mathbf S \vec w $$

The vector \( \vec w\) is the concatenation of the sparse weights \( \vec w_i \in \mathbb R^{n_i} \):

$$
\vec w = \left [ \begin{matrix} \vec w_{1} & \vec w_{2} & \vec w_{3} & \dots & \vec w_{|\mathbf p|} \end{matrix} \right ]^T
$$

Let's take a look at a concrete example:

$$
\vec w =  \left [ \begin{matrix} w_{13} & \dots & w_{15} & w_{21} & w_{23} & w_{35} & \dots  & w_{n3} \end{matrix} \right ]^T
$$

The skinning matrix is sparse; each row corresponds to the skinning of a single vertex \( \mathbf p_i \):

$$
\mathbf S =
\underbrace{ \begin{bmatrix}
T_3 \mathbf p_1 & \cdots & T_5 \mathbf p_1 & 0 & 0 & 0 & \cdots & 0 \\
0 & \cdots & 0 & T_1 \mathbf p_2 & T_3 \mathbf p_2 & 0 & \cdots & 0 \\
0 & \cdots & 0 & 0 & 0 & T_5 \mathbf p_3 & \cdots & 0 \\
\vdots & & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & 0 & 0 & \cdots & T_3 \mathbf p_n \\
\end{bmatrix} }_{\text{columns: number of weights (sparse representation)}}
\text{ (one block row per vertex)}
$$

For a single vertex:

$$ \bar{\mathbf p_i} = \mathbf S_i \vec w_i $$

LBS gradient

Consequently \( \nabla \mathbf s_i : \mathbb R^{n_i} \rightarrow \mathbb R^{{n_i} \times 3} \) (which is the transpose of the Jacobian of \( \mathbf s_i \)):

$$ \begin{array}{lll}
\mathbf J( s_i)^T = \nabla s_i = \nabla \left [ \mathbf s_i(\vec w_i) \right  ] & = & \nabla \left [ \sum_{j=1}^{n} w_{ij}  T_j  \mathbf{p_i} \right  ] \\
& = & \sum_{j=1}^{n} \nabla \left [ w_{ij}  T_j  \mathbf{p_i} \right  ] \\
& = & \nabla \left [ w_{i1}  T_1  \mathbf{p_i} \right  ] + \cdots + \nabla \left [ w_{in}  T_n  \mathbf{p_i} \right  ] \\
& = & \left [ \begin{matrix} \frac{ \partial ( w_{i1}  T_1  \mathbf{p_i}) }{\partial w_{i1} } \\ \cdots \\ \frac{ \partial ( w_{i1}  T_1  \mathbf{p_i}) }{\partial w_{in} } \end{matrix} \right ] + \cdots + \left [ \begin{matrix} \frac{ \partial ( w_{in}  T_n  \mathbf{p_i}) }{\partial w_{i1} } \\ \cdots \\ \frac{ \partial ( w_{in}  T_n  \mathbf{p_i}) }{\partial w_{in} } \end{matrix} \right ] \\
& = & \left [ \begin{matrix} T_1  \mathbf{p_i} \\ 0 \\ 0 \end{matrix} \right ] + \cdots + \left [ \begin{matrix} 0 \\ 0 \\ T_n  \mathbf{p_i} \end{matrix} \right ] \\
& = & \left [ \begin{matrix} T_1  \mathbf{p_i} \\ \vdots \\ T_n  \mathbf{p_i} \end{matrix} \right ]
\end{array} $$
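For LBS the gradient is therefore just the stack of the transformed rest positions. A minimal sketch, under the same hypothetical conventions as above:

```python
import numpy as np

def skin_gradient(p_i, Ts, joint_ids):
    """Gradient of s_i w.r.t. w_i: the (n_i, 3) stack of the rows T_j * p_i."""
    p_h = np.append(p_i, 1.0)
    return np.stack([(Ts[j] @ p_h)[:3] for j in joint_ids])
```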

Weight convexity energy

This term penalizes the weights of each vertex when they do not sum to one:

$$ E(\vec w) = \sum_{i=1}^v {  \left ( 1 - \sum_{j=1}^{n_i} w_{ij}  \right )^2  } $$

Note that one problem with this energy term is that it is not on the same scale as the ARAP energy, so it should be scaled accordingly.


Weight convexity gradient

$$
\begin{array}{lll}
\nabla_{\vec w_i} E & =& \nabla \left [ \sum\limits_{i=1}^v {  \left( 1 - \sum\limits_{j=1}^{n_i} w_{ij}  \right)^2  } \right ] \\

& =& \sum\limits_{i=1}^v {  \nabla \left [ \left( 1 - \sum\limits_{j=1}^{n_i} w_{ij}  \right)^2 \right ] } \\
& =& \nabla \left [ \left( 1 - \sum\limits_{j=1}^{n_i} w_{ij}  \right)^2 \right ] \text{ (the gradient is null for vertices other than } p_i) \\
\end{array}
$$

Consider the chain rule \( \nabla \left [ s( f(\vec {x}) ) \right ] = s'( f(\vec {x}) ) \nabla f(\vec {x}) \) with \(s(x) = x^2 \) and \( f(\vec w_i) = 1 - \sum\limits_{j=1}^{n_i} w_{ij} \)

$$
\begin{array}{lll}
\nabla_{\vec w_i} E & =& {  2(1 - \sum\limits_{j=1}^{n_i} w_{ij}) \nabla \left [ 1 - \sum\limits_{j=1}^{n_i} w_{ij} \right ] } \\
& =& {  2(1 - \sum\limits_{j=1}^{n_i} w_{ij}) \left ( 0 - \nabla \left [ \sum\limits_{j=1}^{n_i} w_{ij} \right ] \right ) } \\
& =& -2 {  (1 - \sum\limits_{j=1}^{n_i} w_{ij}) \nabla \left [ \sum\limits_{j=1}^{n_i} w_{ij} \right ] } \\
& =& -2
    (1 - \sum\limits_{j=1}^{n_i} { w_{ij} } ) 
   \left [ \begin{matrix}
        \frac{ \partial ( w_{i1} + \cdots +  w_{in} ) }{\partial w_{i1} } \\
        \vdots \\ 
        \frac{ \partial ( w_{i1} + \cdots +  w_{in} ) }{\partial w_{in} }
    \end{matrix} \right ]
\\
& =& -2 (1 - \sum\limits_{j=1}^{n_i} { w_{ij} } )
    \left [ \begin{matrix}
        1 \\
        \vdots \\ 
        1
    \end{matrix} \right ]
\end{array}
$$

If you draw the 2D case, you can see intuitively that \( -\nabla E \) re-projects the skin weights onto the nearest normalized version of those weights.
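As a sketch, the energy and its gradient translate directly to code (per vertex):

```python
import numpy as np

def convexity_energy(w_i):
    """E = (1 - sum_j w_ij)^2 for a single vertex."""
    return (1.0 - w_i.sum()) ** 2

def convexity_gradient(w_i):
    """grad E = -2 (1 - sum_j w_ij) [1, ..., 1]^T."""
    return -2.0 * (1.0 - w_i.sum()) * np.ones_like(w_i)
```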

Weight convexity direct solving

$$
\begin{array}{lll}
\nabla_{\vec w_i} E & = & 0 \\
-2 (1 - \sum\limits_{j=1}^{n_i} { w_{ij} } ) \left [ \begin{matrix}  1 \\  \vdots \\  1 \end{matrix} \right ] & = & 0 \\
(-2 + 2\sum\limits_{j=1}^{n_i} { w_{ij} } ) \vec{1} & = & 0 \\
2\sum\limits_{j=1}^{n_i} { w_{ij} } \vec{1} & = & \vec{2} \\
\sum\limits_{j=1}^{n_i} { w_{ij} } \vec{1} & = & \vec{1} \\
\left ( \sum\limits_{j=1}^{n_i} { w_{ij} } \right ) \vec{1} & = & \vec{b}_i \\
\end{array}
$$

Let's describe our problem with matrices:

$$ \mathbf A \mathbf x = \mathbf b $$

For each vertex \( i \) the block has dimensions \( n_i \times n_i \):

\( \mathbf M_i \vec{w}_i = \vec{b}_i \) with \( \mathbf M_i =
\begin{bmatrix}
1 & \cdots & 1  \\
\vdots  & \ddots & \vdots \\
1 & \cdots & 1   \\
\end{bmatrix}
\) and \( \vec{b}_i = \left [ \begin{matrix}  1 \\  \vdots \\  1 \end{matrix} \right ] \)

$$
\begin{bmatrix}
\mathbf M_1 & 0        & 0                 & 0        & 0  \\
0                 & \ddots & 0                 & 0        & 0  \\
0                 & 0        & \mathbf M_i  & 0        & 0 \\
0                 & 0        & 0                 & \ddots & 0 \\
0                 & 0        & 0                 & 0        & \mathbf M_v \\
\end{bmatrix}
\begin{bmatrix}
\vec w_1 \\
\vdots \\
\vec w_i \\
\vdots \\
\vec w_v \\
\end{bmatrix}
=
\begin{bmatrix}
\vec b_1 \\
\vdots \\
\vec b_i \\
\vdots \\
\vec b_v
\end{bmatrix}
$$

In other words, we can add a constant \( a = 1 \) to each block \( \mathbf M_i \) on the diagonal and to the right-hand side \( \vec b \). The constant can be changed; e.g. we can set \( a = 0.5 \) to halve the influence of this energy term.
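A sketch of this assembly with scipy; `n_per_vertex` (the number of influences of each vertex) is a hypothetical input:

```python
import numpy as np
import scipy.sparse as sp

def convexity_system(n_per_vertex, a=1.0):
    """Block-diagonal system M w = b of the weight convexity term.

    Each diagonal block M_i is an (n_i x n_i) matrix of ones and the
    right-hand side is filled with ones, both scaled by the constant a.
    """
    M = sp.block_diag([a * np.ones((n, n)) for n in n_per_vertex], format='csr')
    b = np.concatenate([a * np.ones(n) for n in n_per_vertex])
    return M, b

# e.g. three vertices with 3, 2 and 4 influences, at half influence:
M, b = convexity_system([3, 2, 4], a=0.5)
```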

Positional energy

$$
E( \vec w ) = \sum_{i=1}^v { \| s_i(\vec w_i) - \mathbf p'_i \|^2  }
$$

Where \( \mathbf p'_i \) is some user-prescribed position corresponding to the current set of skinning transformations \( T_j \). You can have as many energy terms as example poses.

Gradient

Let's define \( \vec m_i :\mathbb R^{n_i} \rightarrow \mathbb R^3 \) (input: skin weights, output: 3D point) as \( \vec m_i(\vec w_i) = s_i( \vec w_i ) - \mathbf p'_i \).

$$
\begin{array}{lll}
\nabla \left [ \| \vec m_i \|^2  \right ] & = & \nabla [\vec m_i(\vec w_i)] . 2 .  \vec m_i \\
& = & \nabla [ s_i(\vec w_i) - \mathbf p'_i] . 2( s_i(\vec w_i) - \mathbf p'_i) \\
& = & 2\nabla s_i (\mathbf S_i \vec w_i - \mathbf p'_i)
\end{array}
$$
Then the full expression of the gradient is:

$$
\begin{array}{lll}
\nabla_{\vec w_i} E & = & \nabla \left [ \sum_{i=1}^v {   \| \vec m_i \|^2   }  \right ] \\
                               & = & \nabla \left [ \| \vec m_i \|^2  \right ] \\
                               & = & 2\nabla s_i (\mathbf S_i \vec w_i - \mathbf p'_i) \text{ (the gradient is null for vertices other than } p_i)
\end{array}
$$

Direct solving

$$
\begin{array}{lll}
\nabla_{\vec w_i} E & = & 0 \\
2\nabla s_i (\mathbf S_i \vec w_i - \mathbf p'_i) & = & 0 \\
\nabla s_i (\mathbf S_i \vec w_i - \mathbf p'_i) & = & 0 \\
\nabla s_i \mathbf S_i \vec w_i & = & \nabla s_i \mathbf p'_i \\
M_i \vec w_i & = & \vec b_i \\
\end{array}
$$

In other words, you can add \( M_i \) to each diagonal block and \( \vec b_i \) to the right-hand side to add energy terms that steer towards the prescribed positions \( p'_i \). To take multiple poses into account, just add \( M_i \) and \( \vec b_i \) several times (the values of \( M_i \) depend on the chosen pose).
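A sketch of the per-vertex block for one example pose; note that for LBS \( \mathbf S_i = (\nabla \mathbf s_i)^T \), which the code below exploits (names are hypothetical):

```python
import numpy as np

def positional_block(p_i, p_target, Ts, joint_ids):
    """M_i = grad(s_i) S_i and b_i = grad(s_i) p'_i for one example pose."""
    p_h = np.append(p_i, 1.0)
    grad_s = np.stack([(Ts[j] @ p_h)[:3] for j in joint_ids])  # (n_i, 3)
    S_i = grad_s.T                 # (3, n_i): the columns are T_j p_i
    return grad_s @ S_i, grad_s @ p_target
```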

Distance energy

$$
E( \vec w ) = \sum_{i=1}^v { \left ( d_i^2 - \| s_i(\vec w_i) - \mathbf p'_i \|^2 \right )^2  }
$$

Where \( \mathbf p'_i \) is some user-prescribed position and \( d_i \) a prescribed distance. This makes it possible to set a rigid link between a vertex and some anchor \( \mathbf p'_i \) at distance \( d_i \). Typically \( \mathbf p'_i \) is attached to a bone and thus depends on the current set of skinning transformations \( T_j \). You can add this energy term for each example pose.

ARAP energy

Recall the ARAP energy \( E: \mathbb R^{3v} \rightarrow \mathbb R\):

$$ E(\mathcal S') = \sum_{i=1}^v { c_i  \sum_{j \in \mathcal N(i)} c_{ij} { \| (\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \mathbf R_i ( \mathbf p_i - \mathbf p_j) \|^2  } } $$

With \( c_i = 1 \) and \( c_{ij} = \frac{1}{2} \left ( \cot(\alpha_{ij}) + \cot(\beta_{ij}) \right ) \) the cotangent weights.

We now express it in terms of the skinning weights by injecting the skinning function, \( E: \mathbb R^{|\vec w|} \rightarrow \mathbb R\):

$$ E( \vec w ) = \sum_{i=1}^v { c_i  \sum_{j \in \mathcal N(i)} c_{ij} { \| (\mathbf s_i(\vec w_i) - \mathbf s_j(\vec w_j)) - \mathbf R_i ( \mathbf p_i - \mathbf p_j) \|^2  } } $$
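As a sketch, this energy can be evaluated from the skinned positions \( \bar{\mathbf p} = \mathbf s(\vec w) \); here `neighbors`, `cotan` and `R` are hypothetical precomputed one-rings, cotangent weights and per-vertex rotations:

```python
import numpy as np

def arap_energy(p_bar, p_rest, neighbors, cotan, R):
    """E = sum_i sum_{j in N(i)} c_ij ||(p_bar_i - p_bar_j) - R_i (p_i - p_j)||^2"""
    E = 0.0
    for i, ring in enumerate(neighbors):
        for j in ring:
            e_def  = p_bar[i] - p_bar[j]             # skinned edge
            e_rest = R[i] @ (p_rest[i] - p_rest[j])  # rotated rest edge
            E += cotan[(i, j)] * np.sum((e_def - e_rest) ** 2)
    return E
```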

ARAP gradient

Let's develop the gradient of the ARAP energy according to \( \bar{\mathbf p_i} \). Keep in mind that ultimately we differentiate with respect to the weights \( \vec w_i \in \mathbb R^{n_i} \): \( \bar{\mathbf p_i} \) is obtained through the skinning function \( \mathbf s_i:\mathbb R^{n_i} \rightarrow \mathbb R^3\), while the \( \mathbf p_i \) are prescribed vertices in the rest pose (constant):

$$
\begin{array}{lll}
\nabla_{\bar{p_i}} E & =& \nabla \left [ \sum\limits_{i=1}^v \phantom{c} \sum\limits_{j \in \mathcal N(i)} c_{ij} { \| (\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \mathbf R_i ( \mathbf p_i - \mathbf p_j) \|^2  }  \right ] \\

& = & \nabla \left [ \sum\limits_{i=1}^v \phantom{c} \sum\limits_{j \in \mathcal N(i)} c_{ij} { \| \vec m_{ij} \|^2  }  \right ] \text{ with } \vec m_{ij}(\vec w_i):\mathbb R^{n_i} \rightarrow \mathbb R^3  \text{ (input: skin weights, output: 3D point)}\\

& =& \nabla \left [ \sum\limits_{j \in \mathcal N(1)} c_{1j} { \| \vec m_{1j} \|^2  }   + \cdots + \sum\limits_{j \in \mathcal N(v)} c_{vj} { \| \vec m_{vj} \|^2  }  \right ] \\

& = & \nabla \left [ \sum\limits_{j \in \mathcal N(i)} c_{ij} { \| \vec m_{ij} \|^2  } \right ] + \nabla \left [ \sum\limits_{j \in \mathcal N(i)} c_{ji} { \| \vec m_{ji} \|^2  } \right ]  \text{ (every edge that does not contain } \bar{\mathbf p_i} \text{ vanishes)} \\

& = & \sum\limits_{j \in \mathcal N(i)} c_{ij} { \nabla \left [ \| \vec m_{ij} \|^2  \right ] }  + \sum\limits_{j \in \mathcal N(i)} c_{ij} { \nabla \left [ \| \vec m_{ji} \|^2  \right ] } \text{ (notice } c_{ij} = c_{ji}) \\
\end{array}
$$

To develop \( \nabla \left [ \| \vec m \|^2  \right ] \) you need \( \nabla \left [ \| \mathbf x \|^2  \right ] = 2\mathbf x \) and the chain rule \( \nabla \left [ f(g)  \right ] = \mathbf J[g]^T \nabla f(g) \) with \( f=\| \cdot \|^2\) and \( g = \vec m \) (c.f. gradient rules), which gives you:

$$
\nabla \left [ \| \vec m \|^2  \right ] = \mathbf J(\vec m)^T . 2 .  \vec m
$$

Let's take a look at the transposed Jacobian \( \mathbf J(\vec m)^T \in \mathbb R^{n \times 3}\):

$$\begin{array}{lll}
J(\vec m)^T & = & 
\left [
\begin{matrix}
\frac{\partial m_x}{\partial w_{i1}} & \frac{\partial m_y}{\partial w_{i1}}  & \frac{\partial m_z}{\partial w_{i1}} \\
\vdots & \vdots & \vdots\\
\frac{\partial m_x}{\partial w_{in}} & \frac{\partial m_y}{\partial w_{in}}  & \frac{\partial m_z}{\partial w_{in}} \end{matrix} \right ] \\
& = & \left [ \begin{matrix} \nabla[m_x] & \nabla[m_y] & \nabla[m_z]  \end{matrix} \right ] \\
& = & \nabla[m(\vec w_i)]
\end{array}$$

We develop for the edge \( (\bar{\mathbf p_i} - \bar{\mathbf p_j}) \):

$$
\begin{array}{lll}
\nabla[m_{ij}(\vec w_i)] & = & \nabla \left [ (\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \mathbf R_i ( \mathbf p_i - \mathbf p_j) \right ] \\
& = & \nabla \left [ ( \mathbf s_i(\vec w_i) - \mathbf s_j(\vec w_j)) - \mathbf R_i (\mathbf p_i - \mathbf p_j) \right ]
\end{array}
$$

Note that \(\mathbf p_i\) and \(\mathbf p_j\) do not depend on the skinning weights \( \vec w_i \): they are prescribed positions (rest pose or sculpted by the user); they may depend on the current pose of the skeleton \( \mathbf T \), but not on the weights. Constant terms vanish, and so:

$$
\begin{array}{lll}
\nabla[m_{ij}(\vec w_i)] & = &  \nabla \left [ \mathbf s_i(\vec w_i) \right ] \\
& = & \left [ \begin{matrix} T_1  \mathbf{p_i} \\ \vdots \\ T_n  \mathbf{p_i} \end{matrix} \right ] \text{ as developed earlier}
\end{array}
$$

Similarly we have for the opposite edge:

$$
\nabla[m_{ji}(\vec w_i)] = -\nabla \left [ \mathbf s_i(\vec w_i) \right ]
$$

Going back to the overall gradient of the ARAP energy function:

$$
\begin{array}{lll}
\nabla_{\bar{p_i}} E & = & \sum\limits_{j \in \mathcal N(i)} c_{ij} { \mathbf J(\vec m_{ij})^T 2  \vec m_{ij} }  + \sum\limits_{j \in \mathcal N(i)} c_{ij} { \mathbf J(\vec m_{ji})^T 2  \vec m_{ji} } \\
& = & \sum\limits_{j \in \mathcal N(i)} c_{ij} { \nabla \left [ \mathbf s_i(\vec w_i) \right ] 2  \vec m_{ij} }  + \sum\limits_{j \in \mathcal N(i)} c_{ij} { {-\nabla} \left [ \mathbf s_i(\vec w_i) \right ] 2  \vec m_{ji} } \\
& = & \sum\limits_{j \in \mathcal N(i)} 2c_{ij} { \nabla \left [ \mathbf s_i(\vec w_i) \right ] (\vec m_{ij}   -    \vec m_{ji}) } \\
& = & \sum\limits_{j \in \mathcal N(i)} 2c_{ij} { \nabla \mathbf s_i ((\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \mathbf R_i ( \mathbf p_i - \mathbf p_j)  - ((\bar{\mathbf p_j} - \bar{\mathbf p_i}) - \mathbf R_j ( \mathbf p_j - \mathbf p_i))) } \\
& = & \sum\limits_{j \in \mathcal N(i)} 2c_{ij} { \nabla \mathbf s_i ((\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \mathbf R_i ( \mathbf p_i - \mathbf p_j)  - (\bar{\mathbf p_j} - \bar{\mathbf p_i}) + \mathbf R_j ( \mathbf p_j - \mathbf p_i)) } \\
& = & \sum\limits_{j \in \mathcal N(i)} 2c_{ij} { \nabla \mathbf s_i ((\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \mathbf R_i ( \mathbf p_i - \mathbf p_j)  + (\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \mathbf R_j ( \mathbf p_i - \mathbf p_j)) } \\
& = & \sum\limits_{j \in \mathcal N(i)} 2c_{ij} { \nabla \mathbf s_i (2(\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \mathbf R_i ( \mathbf p_i - \mathbf p_j) - \mathbf R_j ( \mathbf p_i - \mathbf p_j)) } \\
& = & \sum\limits_{j \in \mathcal N(i)} 2c_{ij} { \nabla \mathbf s_i (2(\bar{\mathbf p_i} - \bar{\mathbf p_j}) - (\mathbf R_i + \mathbf R_j) ( \mathbf p_i - \mathbf p_j)) } \\
& = & \sum\limits_{j \in \mathcal N(i)} 4c_{ij} { \nabla \mathbf s_i ((\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \frac{1}{2}(\mathbf R_i + \mathbf R_j) ( \mathbf p_i - \mathbf p_j)) } \\
& = & \nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)} 4c_{ij} { ((\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \frac{1}{2}(\mathbf R_i + \mathbf R_j) ( \mathbf p_i - \mathbf p_j)) }
\end{array}
$$

So for a gradient descent you can use:

$$
\nabla_{\vec{w_i}} E(\vec w) = \nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)} 4c_{ij} { ((\mathbf s_i(\vec w_i) - \mathbf s_j(\vec w_j)) - \frac{1}{2}(\mathbf R_i + \mathbf R_j) ( \mathbf p_i - \mathbf p_j)) }
$$
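A sketch of one descent step on \( \vec w_i \) using this gradient (hypothetical data layout; `grad_s` is the constant \( \nabla \mathbf s_i \) matrix of LBS):

```python
import numpy as np

def arap_weight_step(i, w_i, p_bar, p_rest, neighbors, cotan, R, grad_s, step=1e-3):
    """One gradient descent step on the weights w_i of vertex i."""
    g = np.zeros(3)
    for j in neighbors[i]:
        e_def  = p_bar[i] - p_bar[j]                           # s_i(w) - s_j(w)
        e_rest = 0.5 * (R[i] + R[j]) @ (p_rest[i] - p_rest[j])
        g += 4.0 * cotan[(i, j)] * (e_def - e_rest)
    return w_i - step * (grad_s @ g)                           # grad_s is (n_i, 3)
```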

Directional derivatives

I noticed that keeping the weights normalized by interleaving gradient descent steps with weight normalization is unstable. Instead, you can use directional derivatives to preserve weight normalization during the gradient descent: rather than walking in the gradient direction, you project the gradient on vectors that guarantee normalization is preserved:

$$
\begin{array}{lll}
g_1 = \nabla_{\mathbf v_1}E(\vec w_i) & = & \nabla E(\vec w_i) \cdot \mathbf v_1 = \lim\limits_{h \rightarrow 0}{\frac{E(\vec w_i + h\mathbf{v_1}) - E(\vec{w_i})}{h}} \\
\vdots                           &  & \vdots \\
g_n = \nabla_{\mathbf v_n}E(\vec w_i) & = & \nabla E(\vec w_i) \cdot \mathbf v_n \\
\end{array}
$$

With:

$$
\begin{array}{lll}
\mathbf v_1 & = & ( 1 , -1/(| \vec w_i |-1), -1/(| \vec w_i |-1), \cdots, -1/(| \vec w_i |-1) ) \\
\vdots                           & = & \vdots \\
\mathbf v_n & = &  (-1/(| \vec w_i |-1), -1/(| \vec w_i |-1), \cdots, -1/(| \vec w_i |-1), 1 ) \\
\end{array}
$$

Where \( | \vec w_i | \) is the number of joints influencing vertex \( i \). Each \( \mathbf v_k \) sums to zero, hence so does \( \left[ g_1, \cdots, g_n\right] \): walking along this vector instead of \( \nabla_{\vec w_i} E \) leaves \( \sum_j w_{ij} \) unchanged.
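A minimal sketch of computing \( \left[ g_1, \cdots, g_n\right] \), assuming a dense gradient vector `grad_w_i` for vertex \( i \):

```python
import numpy as np

def projected_direction(grad_w_i):
    """g_k = grad E . v_k for the sum-preserving directions v_k."""
    n = len(grad_w_i)
    V = np.full((n, n), -1.0 / (n - 1))  # row k is v_k
    np.fill_diagonal(V, 1.0)
    return V @ grad_w_i                  # [g_1, ..., g_n], sums to zero
```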

Direct optimization

Let's seek the minimum energy point with \( \nabla_{\bar{p_i}} E = 0 \):

$$
\begin{array}{lll}
\nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)} 4c_{ij} { ((\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \frac{1}{2}(\mathbf R_i + \mathbf R_j) ( \mathbf p_i - \mathbf p_j)) } & = & 0\\
\nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)} c_{ij} {((\bar{\mathbf p_i} - \bar{\mathbf p_j}) - \frac{1}{2}(\mathbf R_i + \mathbf R_j) ( \mathbf p_i - \mathbf p_j)) } & = & 0\\
\nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)} c_{ij}  {(\bar{\mathbf p_i} - \bar{\mathbf p_j}) } - \nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)}  { \frac{c_{ij}}{2}(\mathbf R_i + \mathbf R_j) ( \mathbf p_i - \mathbf p_j) } & = & 0\\
\end{array}
$$

Note that in the case of a linear deformer like linear blend skinning, the \( \nabla \mathbf s_i \) matrix is constant. However, it is a non-invertible rectangular matrix and therefore cannot be cancelled out.

$$
\begin{array}{lll}
\nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)} c_{ij}  { (\bar{\mathbf p_i} - \bar{\mathbf p_j}) }  & = &  \nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)} { \frac{c_{ij}}{2}(\mathbf R_i + \mathbf R_j) ( \mathbf p_i - \mathbf p_j) }\\
 \sum\limits_{j \in \mathcal N(i)} c_{ij}  { \nabla \mathbf s_i (\mathbf S_i \vec w_i - \mathbf S_j \vec w_j) }  & = &  \vec b_i \\
\sum\limits_{j \in \mathcal N(i)} (c_{ij} \nabla \mathbf s_i \mathbf S_i) \vec w_i  - \sum\limits_{j \in \mathcal N(i)} (c_{ij} \nabla \mathbf s_i \mathbf S_j) \vec w_j  & = &  \vec b_i \\
\end{array}
$$
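Before looking at full matrix forms, here is a sketch of scattering these per-edge blocks into a global system with scipy (the index bookkeeping `w_offset` and the per-vertex data layout are assumptions of this example):

```python
import numpy as np
import scipy.sparse as sp

def assemble_arap(w_offset, num_w, neighbors, cotan, grad_s, S, b):
    """Scatter the blocks c_ij (grad s_i S_i) and -c_ij (grad s_i S_j).

    w_offset[i] : start of the slice of w_i in the global weight vector.
    grad_s[i]   : (n_i, 3) gradient of s_i;  S[i] : (3, n_i) skinning block.
    b[i]        : (n_i,) right-hand side of vertex i.
    """
    A = sp.lil_matrix((num_w, num_w))
    rhs = np.zeros(num_w)
    for i, ring in enumerate(neighbors):
        ri = slice(w_offset[i], w_offset[i] + S[i].shape[1])
        for j in ring:
            rj = slice(w_offset[j], w_offset[j] + S[j].shape[1])
            A[ri, ri] += cotan[(i, j)] * (grad_s[i] @ S[i])
            A[ri, rj] -= cotan[(i, j)] * (grad_s[i] @ S[j])
        rhs[ri] = b[i]
    return A.tocsr(), rhs
```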

From now on, the goal is to express our problem in matrix form:

$$ \mathbf A \mathbf x = \mathbf b $$

Matrix form first attempt

$$
\sum\limits_{j \in \mathcal N(i)} (c_{ij} \nabla \mathbf s_i \mathbf S_i) \vec w_i  - \sum\limits_{j \in \mathcal N(i)} (c_{ij} \nabla \mathbf s_i \mathbf S_j) \vec w_j  =  \vec b_i
$$

Separate page notes

Matrix form second attempt

$$
\nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)} (c_{ij} \mathbf S_i) \vec w_i  - \nabla \mathbf s_i \sum\limits_{j \in \mathcal N(i)} (c_{ij} \mathbf S_j) \vec w_j  =  \vec b_i
$$

Separate page notes

Lagrange multipliers

Separate page notes
