To get started: Clone this repository using

```
git clone --recursive http://github.com/alecjacobson/computer-graphics-mass-spring-systems.git
```
In this assignment we'll consider animating a deformable shape.
We model the shape's physical behavior by treating it as a network of point masses and springs. We can think of our shape as a graph where each vertex is a point mass and each edge is a spring.
Given initial conditions (each point's starting position and starting velocity, if any) we will create an animation following the laws of physics forward in time. In the real world, physics is deterministic: if we know the current state, we can be sure of what the next state will be (at least at the scales we're considering). This will also be true of our physical simulation.
The law that we start with is Newton's second law, which states that the forces
$\mathbf{f} \in \mathbf{R}^3$ acting on a body must equal its mass $m$ times its acceleration $\mathbf{a} \in \mathbf{R}^3$:

$$ \mathbf{f} = m \mathbf{a}. $$

Notice that the acceleration $\mathbf{a} = \partial^2 \mathbf{p} / \partial t^2$ is the second derivative of the position $\mathbf{p}$ with respect to time $t$: forces tell us how positions change over time.
Personifying physical objects, we say that they are at rest when their potential energy is zero. When an object is not at rest, it exerts a force pushing it toward its rest state (elastic force), decreasing its potential energy as fast as possible. The force is the negative gradient of the potential energy.
A simple spring is defined by its stiffness $k$ and its rest length $r_{ij}$. Its potential energy measures how far the spring connecting the mass positions $\mathbf{p}_i, \mathbf{p}_j \in \mathbf{R}^3$ is stretched or compressed relative to that rest length:

$$ V(\mathbf{p}_i,\mathbf{p}_j) = \frac12 k( \| \mathbf{p}_i - \mathbf{p}_j\| - r_{ij} )^2. $$

The force exerted by the spring on each mass is the negative partial derivative of the potential energy with respect to the corresponding mass position. For example, for $\mathbf{p}_i$ we have

$$ \mathbf{f}_i = -\frac{\partial V}{\partial \mathbf{p}_i} \in \mathbf{R}^3. $$

For now, we can postpone expanding $\partial V / \partial \mathbf{p}_i$ symbolically.
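As a concrete (if tiny) reference, evaluating this single-spring potential in code is a one-liner. This is just an illustrative sketch; the function name and signature are mine, not part of the assignment's given API:

```cpp
#include <Eigen/Core>

// Potential energy V = 1/2 k (‖p_i - p_j‖ - r_ij)^2 of one spring.
// (Illustrative helper; name and signature are assumptions.)
double spring_potential_energy(
  const Eigen::Vector3d & pi,
  const Eigen::Vector3d & pj,
  const double k,
  const double rij)
{
  const double stretch = (pi - pj).norm() - rij;
  return 0.5 * k * stretch * stretch;
}
```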
Our problem is to determine where all of the masses will be after a small duration in time ($\Delta t$).
Question: What is a reasonable choice for the value of $\Delta t$? Hint: 🎞️ or 🖥️
We'll assume we know the current positions for each mass $\mathbf{p}_i^t \in \mathbf{R}^3$ at the current time $t$, as well as the positions one small time step in the past, $\mathbf{p}_i^{t-\Delta t}$.

In the real world, the trajectory of an object follows a continuous curve as a function of time. In our simulation, we only need to know the position of each mass at discrete moments in time. We use this to build discrete approximations of the time derivatives (velocities and accelerations) that we encounter. Immediately, we can replace the current velocities with a finite difference of the positions:

$$ \dot{\mathbf{p}}_i^t = \frac{\mathbf{p}_i^t - \mathbf{p}_i^{t-\Delta t}}{\Delta t}, $$

where the dot indicates differentiation with respect to time.
We can also use a central finite difference to define the acceleration at time $t$:

$$ \mathbf{a}_i^t = \frac{\partial^2 \mathbf{p}_i}{\partial t^2}(t) \approx \frac{\mathbf{p}_i^{t+\Delta t} - 2\mathbf{p}_i^{t} + \mathbf{p}_i^{t-\Delta t}}{\Delta t^2}. $$
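To see why this central difference approximates the second derivative, sum the Taylor expansions of the position at $t \pm \Delta t$ (a standard argument, spelled out here for completeness):

$$ \mathbf{p}(t\pm\Delta t) = \mathbf{p}(t) \pm \Delta t\, \dot{\mathbf{p}}(t) + \frac{\Delta t^2}{2} \ddot{\mathbf{p}}(t) \pm \frac{\Delta t^3}{6} \dddot{\mathbf{p}}(t) + O(\Delta t^4). $$

Adding the two expansions cancels the odd-order terms, and rearranging gives

$$ \frac{\mathbf{p}(t+\Delta t) - 2\mathbf{p}(t) + \mathbf{p}(t-\Delta t)}{\Delta t^2} = \ddot{\mathbf{p}}(t) + O(\Delta t^2). $$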
This expression mentions our unknown variables $\mathbf{p}^{t+\Delta t}$ for the first time. In the equation $\mathbf{f} = m\mathbf{a}$, substituting this approximation for the acceleration couples the forces to the unknown positions:

$$ \mathbf{f}(\mathbf{p}^{t+\Delta t}) = m\, \frac{\mathbf{p}^{t+\Delta t} - 2\mathbf{p}^{t} + \mathbf{p}^{t-\Delta t}}{\Delta t^2}. $$
Question: We've chosen to define $\mathbf{f}$ as the forces that implicitly depend on the unknown positions $\mathbf{p}^{t+\Delta t}$ at the end of the time step $t+\Delta t$. What would happen if we defined the forces to explicitly depend on the (known) current positions $\mathbf{p}^t$?
An alternative is to view physics simulation as an optimization problem. We will define an energy that is minimized by the value of $\mathbf{p}^{t+\Delta t}$ that satisfies $\mathbf{f} = m\mathbf{a}$:

$$ \mathbf{p}^{t+\Delta t} = \mathop{\text{argmin}}_{\mathbf{p}} \underbrace{ \left(\sum\limits_{ij} \frac12 k( \| \mathbf{p}_i-\mathbf{p}_j\| - r_{ij})^2\right) + \frac{\Delta t^2}{2} \left(\sum\limits_i m_i \left\|\frac{\mathbf{p}_i - 2 \mathbf{p}^{t}_i + \mathbf{p}_i^{t-\Delta t}}{\Delta t^2 }\right\|^2 \right) - \left(\sum\limits_i \mathbf{p}_i^\top \mathbf{f}^\text{ext}_i \right) }_{E(\mathbf{p})}. $$
Keen observers will identify that the first term is potential energy and the second term resembles kinetic energy. Intuitively, we can see the first term as trying to return the spring to rest length (elasticity) and the second term as trying to keep masses moving in the same direction.
Because of the $\| \mathbf{p}_i-\mathbf{p}_j\| - r_{ij}$ term, minimizing $E$ is a non-linear optimization problem: the energy is not quadratic in the unknown positions, so we cannot minimize it by solving a single linear system.
In a relatively recent SIGGRAPH paper "Fast Simulation of Mass-Spring Systems", Tiantian Liu et al. made a neat observation that makes designing an algorithm to minimize $E$ quite simple and fast. They observe that the troublesome spring term can itself be written as a small constrained minimization over an auxiliary vector $\mathbf{d}_{ij}$:
$$ (\| \mathbf{p}_i - \mathbf{p}_j\| - r_{ij})^2 = \mathop{\text{min}}_{\mathbf{d}_{ij}\in \mathbf{R}^3,\ \| \mathbf{d}_{ij}\| = r_{ij}} \| (\mathbf{p}_i - \mathbf{p}_j) - \mathbf{d}_{ij}\|^2. $$
It may seem like we've just created extra work. We took a closed-form expression
(left) and replaced it with an optimization problem (right). Yet this
optimization problem is small ($\mathbf{d}_{ij}$ is a single 3D vector) and can be
easily solved independently (and even in parallel) for each spring (i.e.,
$\mathbf{d}_{ij}$ doesn't depend on $\mathbf{d}_{\ell k}$, etc.). Reading the right-hand side in
English, it says: find the vector of length $r_{ij}$ that is as close as possible
to the current spring vector $\mathbf{p}_i - \mathbf{p}_j$.
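In code, that "closest vector of length $r_{ij}$" is just a normalization, so the local step is one line per spring. A minimal sketch, assuming positions are stored as rows of a matrix (names and layout are my assumptions, not the assignment's API):

```cpp
#include <Eigen/Core>

// Local step for one spring (i,j): rescale the current spring vector
// p_i - p_j to have length r_ij. (Illustrative helper; assumes the
// spring vector is non-zero so the normalization is well defined.)
Eigen::RowVector3d local_step(
  const Eigen::MatrixXd & p, // #vertices by 3 stacked positions
  const int i,
  const int j,
  const double rij)
{
  const Eigen::RowVector3d v = p.row(i) - p.row(j);
  return rij * v / v.norm(); // d_ij
}
```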
Now, suppose we already knew the vectors $\mathbf{d}_{ij}$ corresponding to the unknown optimal solution $\mathbf{p}^{t+\Delta t}$. Then, treating each $\mathbf{d}_{ij}$ as a constant, we could find the optimal solution by solving the quadratic optimization problem:
$$ \mathbf{p}^{t+\Delta t} = \mathop{\text{argmin}}_{\mathbf{p}} \underbrace{ \left(\sum\limits_{ij} \frac12 k\left\| (\mathbf{p}_i-\mathbf{p}_j) - \mathbf{d}_{ij}\right\|^2\right) + \frac{\Delta t^2}{2} \left(\sum\limits_i m_i \left\|\frac{\mathbf{p}_i - 2 \mathbf{p}^{t}_i + \mathbf{p}_i^{t-\Delta t}}{\Delta t^2 }\right\|^2 \right) - \left(\sum\limits_i \mathbf{p}_i^\top \mathbf{f}^\text{ext}_i \right) }_{\tilde{E}(\mathbf{p})}. $$
The modified energy $\tilde{E}(\mathbf{p})$ is quadratic in the unknowns $\mathbf{p}$, so it can be minimized by solving a linear system of equations.
This leads to a straightforward "local-global" iterative algorithm:

- Step 1 (local): Given current values of $\mathbf{p}$, determine $\mathbf{d}_{ij}$ for each spring.
- Step 2 (global): Given all $\mathbf{d}_{ij}$ vectors, find positions $\mathbf{p}$ that minimize the quadratic energy $\tilde{E}$.
- Step 3: If "not satisfied", go to Step 1.
For the purposes of this assignment we will assume that we're "satisfied" after a fixed number of iterations (e.g., 50). More advanced stopping criteria could (should) be employed in general.
The subtext of this assignment is understanding the computational aspects of large matrices. In the algorithm above, Step 1 is easy and relies only on "local" information for each spring. Step 2, on the other hand, involves all springs simultaneously.
Matrices are our convenient notation for representing both the linear operators (e.g., in the linear system that Step 2 must solve) and the quadratic forms (e.g., in the energy $\tilde{E}$) that appear in our problem.
Let's begin by being precise about some notation. We will stack up all of the $n$ unknown mass positions $\mathbf{p}_i \in \mathbf{R}^3$ as rows in a matrix $\mathbf{p} \in \mathbf{R}^{n\times 3}$.
We can then express the inertial term using matrices:

$$ \begin{align*} \frac{\Delta t^2}{2} \left(\sum\limits_i m_i \left\|\frac{\mathbf{p}_i - 2 \mathbf{p}^{t}_i + \mathbf{p}_i^{t-\Delta t}}{\Delta t^2 }\right\|^2 \right) &= \frac{1}{2\Delta t^2} \left(\sum\limits_i \left(\mathbf{p}_i - 2 \mathbf{p}^{t}_i + \mathbf{p}_i^{t-\Delta t}\right)^\top m_i \left(\mathbf{p}_i - 2 \mathbf{p}^{t}_i + \mathbf{p}_i^{t-\Delta t}\right) \right) \\ &= \frac{1}{2\Delta t^2} \mathop{\text{tr}}{\left( \left(\mathbf{p} - 2\mathbf{p}^{t} + \mathbf{p}^{t-\Delta t}\right)^\top \mathbf{M} \left(\mathbf{p} - 2\mathbf{p}^{t} + \mathbf{p}^{t-\Delta t}\right) \right)}, \end{align*} $$
where $\mathbf{p}^t$ and $\mathbf{p}^{t-\Delta t} \in \mathbf{R}^{n\times 3}$ similarly stack the known positions at the two previous time steps, and the entries of the square matrix $\mathbf{M} \in \mathbf{R}^{n\times n}$ are given by

$$ \mathbf{M}_{ij} = \begin{cases} m_{i} & \text{ if } i = j, \\ 0 & \text{ otherwise.} \end{cases} $$
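Since $\mathbf{M}$ is diagonal, it can be built directly from a vector of per-vertex masses. A minimal dense sketch with Eigen (variable names are mine; a sparse treatment comes later):

```cpp
#include <Eigen/Dense>

// Build the diagonal mass matrix M (M_ii = m_i, zeros elsewhere)
// from per-vertex masses m. (Illustrative helper.)
Eigen::MatrixXd mass_matrix(const Eigen::VectorXd & m)
{
  Eigen::MatrixXd M = m.asDiagonal();
  return M;
}
```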
The potential energy term can be similarly written with matrices. We'll start by introducing the signed incidence matrix of our mass-spring network of $n$ vertices and $m$ springs, $\mathbf{A} \in \mathbf{R}^{m\times n}$, with one row per spring $e = (i,j)$:

$$ \mathbf{A}_{ek} = \begin{cases} +1 & \text{ if } k = i, \\ -1 & \text{ if } k = j, \\ 0 & \text{ otherwise} \end{cases} $$

(which endpoint receives the $+1$ is an arbitrary but fixed choice per spring). Using this matrix $\mathbf{A}$, the $e$-th row of the product $\mathbf{A}\mathbf{p} \in \mathbf{R}^{m\times 3}$ is the spring vector $\mathbf{p}_i - \mathbf{p}_j$.
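For example (a toy network of my own, not from the assignment data), a chain of $n=3$ masses connected by $m=2$ springs $(1,2)$ and $(2,3)$ has

$$ \mathbf{A} = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix}, \qquad \mathbf{A}\mathbf{p} = \begin{pmatrix} \mathbf{p}_1 - \mathbf{p}_2 \\ \mathbf{p}_2 - \mathbf{p}_3 \end{pmatrix}, $$

so the rows of $\mathbf{A}\mathbf{p}$ are exactly the spring vectors.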
We can now write the modified potential energy of $\tilde{E}$ in matrix form:

$$ \left(\sum\limits_{ij} \frac12 k\left\| (\mathbf{p}_i-\mathbf{p}_j) - \mathbf{d}_{ij}\right\|^2\right) = \frac{k}{2} \mathop\text{tr}{\left((\mathbf{A} \mathbf{p} - \mathbf{d})^\top (\mathbf{A} \mathbf{p} - \mathbf{d})\right)}, $$

where we stack the vectors $\mathbf{d}_{ij} \in \mathbf{R}^3$ as rows in a matrix $\mathbf{d} \in \mathbf{R}^{m\times 3}$, ordered consistently with the rows of $\mathbf{A}$.
Combining our two matrix expressions together with the stacked external forces $\mathbf{f}^\text{ext} \in \mathbf{R}^{n\times 3}$, we can write $\tilde{E}$ entirely in matrix form:

$$ \begin{align*} \tilde{E}(\mathbf{p}) &= \frac{k}{2} \mathop\text{tr}{\left((\mathbf{A} \mathbf{p} - \mathbf{d})^\top (\mathbf{A} \mathbf{p} - \mathbf{d})\right)} + \frac{1}{2\Delta t^2}\mathop{\text{tr}}{\left( \left(\mathbf{p} - 2\mathbf{p}^{t} + \mathbf{p}^{t-\Delta t}\right)^\top \mathbf{M} \left(\mathbf{p} - 2\mathbf{p}^{t} + \mathbf{p}^{t-\Delta t}\right) \right)} - \mathop\text{tr}{\left(\mathbf{p}^\top \mathbf{f}^\text{ext}\right)} \\ &= \frac{1}{2} \mathop\text{tr}{\left( \mathbf{p}^\top \left(k \mathbf{A}^\top \mathbf{A} + \frac{1}{\Delta t^2 }\mathbf{M}\right) \mathbf{p} \right)} - \mathop\text{tr}{\left(\mathbf{p}^\top\left(k \mathbf{A}^\top \mathbf{d} + \frac{1}{\Delta t^2 }\mathbf{M} (2\mathbf{p}^t - \mathbf{p}^{t-\Delta t}) + \mathbf{f}^\text{ext}\right)\right)} + \text{ constants}. \end{align*} $$
Question: Why do we not bother to write out the terms that are constant with respect to $\mathbf{p}$?
We can clean this up by introducing a few auxiliary matrices:

$$ \begin{align*} \mathbf{Q} &= k \mathbf{A}^\top \mathbf{A} + \frac{1}{\Delta t^2 }\mathbf{M}, \\ \mathbf{b} &= k \mathbf{A}^\top \mathbf{d} + \frac{1}{\Delta t^2 }\mathbf{M} (2\mathbf{p}^t - \mathbf{p}^{t-\Delta t}) + \mathbf{f}^\text{ext}. \end{align*} $$
Now our optimization problem is neatly written as:

$$ \mathbf{p}^{t+\Delta t} = \mathop{\text{argmin}}_{\mathbf{p}} \frac12 \mathop\text{tr}{\left(\mathbf{p}^\top \mathbf{Q} \mathbf{p}\right)} - \mathop\text{tr}{\left(\mathbf{p}^\top \mathbf{b}\right)}. $$
Recall: The trace operator behaves very nicely when differentiating.
$$\frac{\partial \mathop\text{tr}{\left(\mathbf{x}^\top \mathbf{y}\right)}}{\partial \mathbf{x}} = \mathbf{y}$$ and
$$\frac{\partial \frac12 \mathop\text{tr}{\left(\mathbf{x}^\top \mathbf{Y} \mathbf{x}\right)}}{\partial \mathbf{x}} = \mathbf{Y} \mathbf{x}$$
Taking a derivative with respect to $\mathbf{p}$ (using the identities above) and setting the expression to zero reveals that the minimizer satisfies the linear system

$$ \mathbf{Q}\, \mathbf{p} = \mathbf{b}. $$

Since $\mathbf{Q}$ is symmetric positive definite, this system has a unique solution, which we might be tempted to write as $\mathbf{p} = \mathbf{Q}^{-1} \mathbf{b}$.
From an algorithmic point of view, the notation $\mathbf{Q}^{-1}\mathbf{b}$ is misleading: it suggests explicitly constructing the inverse, `Qinv = inverse(Q)`, and then conducting a matrix multiply, `p = Qinv * b`. This is almost always a bad idea. Constructing `Qinv` can be very expensive and is typically less numerically accurate than solving the system directly.
Instead, we should think of the action of multiplying by the inverse of a matrix as a single "solve" operation: `p = solve(Q,b)`. Some programming languages (such as MATLAB) indicate this using operator overloading as "matrix division": `p = Q \ b`.
All good matrix libraries (including Eigen) will implement this "solve" action. A very common approach is to compute a factorization of the matrix into a lower triangular matrix times its transpose:

$$ \mathbf{Q} = \mathbf{L} \mathbf{L}^\top. $$
Finding this $\mathbf{L}$ is known as Cholesky factorization and takes $O(n^3)$ time in general. The action of solving against a triangular matrix is simple forward-/back-substitution and takes $O(n^2)$ time for a dense matrix.
A key insight of the Liu et al. paper is that our matrix $\mathbf{Q}$ does not change: it depends only on the connectivity, stiffness, masses, and $\Delta t$, none of which vary across time steps or iterations. This means we can factorize it once and reuse the factorization many times:

```
// Once Q is known
L = precompute_factorization(Q)
// ... each time step
// ... ... each iteration
p = back_substitution(transpose(L),forward_substitution(L,b))
```
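Putting Steps 1 and 2 together with this precomputation, a dense Eigen version of one simulation step might look like the sketch below. Every name here (`step`, `b_rest`, the fixed 50-iteration count) is an illustrative assumption, not the assignment's prescribed API:

```cpp
#include <Eigen/Dense>

// One time step of the local-global algorithm (illustrative sketch).
// E: #springs-by-2 endpoint indices; r: rest lengths; k: stiffness;
// A: signed incidence matrix; b_rest: the d-independent part of b
// (inertial + external terms); llt: factorization of Q computed ONCE
// beforehand via  Eigen::LLT<Eigen::MatrixXd> llt(Q);
Eigen::MatrixXd step(
  const Eigen::MatrixXi & E,
  const Eigen::VectorXd & r,
  const double k,
  const Eigen::MatrixXd & A,
  const Eigen::LLT<Eigen::MatrixXd> & llt,
  const Eigen::MatrixXd & b_rest,
  Eigen::MatrixXd p) // warm start from current positions
{
  for(int iter = 0; iter < 50; iter++) // "satisfied" after 50 iterations
  {
    // Step 1 (local): closest rest-length vector for each spring
    Eigen::MatrixXd d(E.rows(), 3);
    for(int e = 0; e < E.rows(); e++)
    {
      Eigen::RowVector3d v = p.row(E(e, 0)) - p.row(E(e, 1));
      d.row(e) = r(e) * v / v.norm();
    }
    // Step 2 (global): solve Q p = b with b = k A^T d + b_rest,
    // reusing the precomputed factorization (substitution only)
    p = llt.solve(k * A.transpose() * d + b_rest);
  }
  return p;
}
```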
For small mass-spring systems, dense storage and a dense factorization of $\mathbf{Q}$ are perfectly fine. But the $O(n^2)$ memory and $O(n^3)$ factorization cost become prohibitive as the number of vertices $n$ grows.
Fortunately, we can avoid this worst-case behavior by observing a special structure in our matrices. Let's start with the mass matrix $\mathbf{M}$: only its $n$ diagonal entries are non-zero. Similarly, the incidence matrix $\mathbf{A}$ has only two non-zero entries per row, one for each endpoint of the corresponding spring. By storing only the non-zero entries (as row-column-value tuples), we've reduced the storage required from $O(n^2)$ to $O(n + m)$.
Because of this, most sparse matrix libraries require (or prefer) that all entries be inserted at once, with the non-zero indices presorted, before creating the internal data structure. Friendly sparse matrix libraries like Eigen will let us create a list of $(i,j,v)$ triplets for the non-zero entries and then construct the sparse matrix in one go.
So if our dense matrix code looked something like:

```
Afull = zero(m,n)
for each pair i j
  Afull(i,j) += v
end
```
By convention we use `+=` instead of `=` to allow for repeated $(i,j)$ pairs in the list.
then we can replace this with:

```
triplet_list = []
for each pair i j
  triplet_list.append( i, j, v )
end
Asparse = construct_from_triplets( triplet_list )
```
Warning: Do not attempt to set values of a sparse matrix directly. That is, do not write: `A_sparse(i,j) = v`.
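With Eigen, this triplet pattern might look like the following sketch, here building the signed incidence matrix $\mathbf{A}$ from an edge list (function name and argument layout are my assumptions, not the assignment's handed-out API):

```cpp
#include <Eigen/Sparse>
#include <vector>

// Build the m-by-n signed incidence matrix A from an edge list E,
// where row e of E holds the vertex indices (i, j) of spring e.
// (Illustrative sketch; names and layout are assumptions.)
Eigen::SparseMatrix<double> signed_incidence(
  const Eigen::MatrixXi & E,
  const int n)
{
  std::vector<Eigen::Triplet<double>> ijv;
  for(int e = 0; e < E.rows(); e++)
  {
    ijv.emplace_back(e, E(e, 0), +1.0);
    ijv.emplace_back(e, E(e, 1), -1.0);
  }
  Eigen::SparseMatrix<double> A(E.rows(), n);
  A.setFromTriplets(ijv.begin(), ijv.end());
  return A;
}
```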
Storing only the non-zero entries means we must rewrite all basic matrix operations (matrix-vector product, matrix addition, matrix-matrix product, transposition, etc.). This is outside the scope of our assignment, so we will use Eigen's `SparseMatrix` class.
Most important to our mass-spring system is the "solve" action discussed above. Similar to the dense case, we can precompute a factorization and use substitution at runtime. For our sparse matrix, these steps will be much faster, since the factorization can exploit the sparsity of $\mathbf{Q}$.
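In Eigen, a sparse analogue of the dense `LLT` sketch above could look like this (the choice of `SimplicialLLT` is mine, matching the $\mathbf{Q} = \mathbf{L}\mathbf{L}^\top$ factorization; other sparse solvers would also work):

```cpp
#include <Eigen/Sparse>

// Sketch: factor the sparse Q once, then solve Q p = b repeatedly.
// Q and b stand for the system assembled above (names are assumptions).
void precompute_and_solve(
  const Eigen::SparseMatrix<double> & Q,
  const Eigen::MatrixXd & b)
{
  // Once Q is known: sparse Cholesky factorization.
  Eigen::SimplicialLLT<Eigen::SparseMatrix<double>> solver;
  solver.compute(Q);
  // ... each time step, each iteration: substitution only.
  Eigen::MatrixXd p = solver.solve(b);
}
```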
Subject to the external force of gravity, our mass-spring network would simply fall and keep falling unless some part of it is held in place.
We can pin down vertices (e.g., those listed in `b`) at their initial positions by requiring that their corresponding position values never change.
There are various ways we can introduce this simple linear equality constraint into the energy optimization above. For this assignment, we'll use the easy-to-implement penalty method. We will add an additional quadratic energy term which is minimized when our pinning constraints are satisfied:
$$ \frac{w}{2} \sum\limits_{i \in \text{pinned}} \left\| \mathbf{p}_i - \mathbf{p}^\text{rest}_i \right\|^2, $$

where the penalty weight $w$ is set to some very large value (e.g., `w=1e10`). We can write this in matrix form as:

$$ \frac{w}{2} \mathop\text{tr}{\left( (\mathbf{C}\mathbf{p} - \mathbf{C}\mathbf{p}^\text{rest})^\top (\mathbf{C}\mathbf{p} - \mathbf{C}\mathbf{p}^\text{rest}) \right)}, $$

where $\mathbf{C} \in \mathbf{R}^{|\text{pinned}| \times n}$ is a selection matrix with one row per pinned vertex, containing a single $+1$ in the column corresponding to that vertex.
We can add these quadratic and linear coefficients to $\mathbf{Q}$ and $\mathbf{b}$ above, respectively, before factorizing; the solve will then take care of keeping the pinned vertices (approximately) in place.
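Concretely, expanding the penalty term with the same trace identities as before shows which coefficients to add (a small derivation for completeness):

$$ \frac{w}{2} \mathop\text{tr}{\left( (\mathbf{C}\mathbf{p} - \mathbf{C}\mathbf{p}^\text{rest})^\top (\mathbf{C}\mathbf{p} - \mathbf{C}\mathbf{p}^\text{rest}) \right)} = \frac12 \mathop\text{tr}{\left(\mathbf{p}^\top \left(w\mathbf{C}^\top\mathbf{C}\right) \mathbf{p}\right)} - \mathop\text{tr}{\left(\mathbf{p}^\top \left(w\mathbf{C}^\top\mathbf{C}\,\mathbf{p}^\text{rest}\right)\right)} + \text{ constants}, $$

so we add $w\mathbf{C}^\top\mathbf{C}$ to $\mathbf{Q}$ and $w\mathbf{C}^\top\mathbf{C}\,\mathbf{p}^\text{rest}$ to $\mathbf{b}$.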
Potentially helpful functions and classes:

- `Eigen::Triplet`
- `igl::edge_lengths`
- `igl::diag`
- `igl::sparse`
- `igl::massmatrix`
- `.sparseView()` on `Eigen::MatrixXd` types
Write your dense code first. This will be simpler to debug.
At this point you should be able to run on small examples.
For example, running `./masssprings_dense ../data/single-spring-horizontal.json` should produce a swinging, bouncing spring:
If the single spring example is not working, debug immediately before proceeding to examples with more than one spring.
Running `./masssprings_dense ../data/horizontal-chain.json` will produce a hanging catenary chain:
Running `./masssprings_dense ../data/net.json` will produce a hanging net:
If you try to run `./masssprings_dense ../data/flag.json`, you'll end up waiting a while.
Start your sparse implementations by copying-and-pasting your correct dense code. Remove any dense operations and construct all matrices using triplet lists.
Now you should be able to see more complex examples, such as running `./masssprings_sparse ../data/flag.json` or `./masssprings_sparse ../data/skirt.json`:
This README file is too complex for texify to render. Use readme2tex locally to render the TeX to SVGs:

```
python -m readme2tex --output README.md README.tex.md --nocdn
sed -i 's/invert_in_darkmode\"/invert_in_darkmode\&sanitize=true\"/g' README.md
```