%matplotlib inline
import matplotlib.pyplot as plt
# Mandatory imports...
import numpy as np
import torch
from torch import tensor
from torch.nn import Parameter, ParameterList
from torch.autograd import grad
# Custom modules:
from model import Model # Optimization blackbox
from display import plot, train_and_display
from torch_complex import rot # Rotations in 2D
We've seen how to work with unlabeled segmentation masks encoded as measures $\alpha$ and $\beta$: the key idea here is to use a data fidelity term which is well-defined up to resampling.
def scal( α, f ) :
    "Scalar product between two vectors."
    return torch.dot( α.view(-1), f.view(-1) )
def sqdistances(x_i, y_j) :
    "Matrix of squared distances, C_ij = |x_i-y_j|^2."
    return ( (x_i.unsqueeze(1) - y_j.unsqueeze(0)) ** 2).sum(2)
def distances(x_i, y_j) :
    "Matrix of distances, C_ij = |x_i-y_j|."
    return (x_i.unsqueeze(1) - y_j.unsqueeze(0)).norm(p=2,dim=2)
def fidelity(α, β) :
    "Energy Distance between two sampled probability measures."
    α_i, x_i = α
    β_j, y_j = β
    K_xx = -distances(x_i, x_i)
    K_xy = -distances(x_i, y_j)
    K_yy = -distances(y_j, y_j)
    cost =   .5*scal( α_i, K_xx @ α_i ) \
           -     scal( α_i, K_xy @ β_j ) \
           + .5*scal( β_j, K_yy @ β_j )
    return cost
Most often, we work with measures that have been rigidly aligned with each other:
class IsometricRegistration(Model) :
    "Find the optimal translation and rotation."
    def __init__(self, α) :
        "Defines the parameters of a rigid deformation of α."
        super(Model, self).__init__()
        self.α, self.x = α[0].detach(), α[1].detach()
        self.θ = Parameter(tensor( 0.      ))  # Angle
        self.τ = Parameter(tensor( [0.,0.] ))  # Position

    def __call__(self, t=1.) :
        # At the moment, PyTorch does not support complex numbers...
        x_t = rot(self.x, t*self.θ) + t*self.τ
        return self.α, x_t

    def cost(self, target) :
        "Returns a cost to optimize."
        return fidelity( self(), target )
from sampling import load_measure
α_0 = load_measure("data/hippo_a.png")
β = load_measure("data/hippo_b.png")
print("Number of atoms : {} and {}.".format(len(α_0[0]), len(β[0])))
# In practice, we often quotient out affine deformations:
isom = IsometricRegistration(α_0)
isom.fit(β)
α = isom()
α = α[0].detach().requires_grad_(), α[1].detach().requires_grad_()
# Let's display everyone:
plt.figure(figsize=(10,10))
plot(β, "blue", alpha=.7)
plot(α_0, "purple", alpha=.2)
plot(α, "red", alpha=.7)
plt.axis("equal")
plt.axis([0,1,0,1])
plt.show()
Let us now present some weakly parametrized models that let us study the variability between $\alpha$ and $\beta$ in a continuous way. In the previous notebook, we've seen that the "free particles" setting could be implemented as follows - with control variables $(p_i)\in\mathbb{R}^{N\times 2}$:
class L2Registration(Model) :
    "Find the optimal mass positions."
    def __init__(self, α) :
        "Defines the parameters of a free deformation of α."
        super(Model, self).__init__()
        self.α, self.x = α[0].detach(), α[1].detach()
        self.p = Parameter(torch.zeros_like(self.x))

    def __call__(self, t=1.) :
        "Applies the model on the source point cloud."
        x_t = self.x + t*self.p
        return self.α, x_t

    def cost(self, target) :
        "Returns a cost to optimize."
        return fidelity( self(), target )
l2_reg = train_and_display(L2Registration, α, β)
Is this a desirable result?
Smoothing. Crucially, the "free particles" setting does not enforce any kind of smoothness on the deformation and can induce tearings in the registration. With a kernel norm as fidelity, we thus observe left-out particles that lag behind because of the fidelity's vanishing gradients. Going further, Wasserstein-like fidelities alleviate this particular issue... But even then, we may observe mass splittings instead of, say, rotations when registering two star-shaped measures - try it with an "X" and a "+"!
A simple remedy is to smooth the displacement field, that is, to use a vector field
$$ \begin{align} v(x)~=~\sum_{i=1}^N k(x-x_i)\,p_i ~=~ \big(k\star \sum_{i=1}^N p_i\,\delta_{x_i}\big)(x) \end{align} $$
to move around our Dirac masses, with $k$ a blurring kernel function - e.g. a Gaussian.
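For instance, with a Gaussian kernel of bandwidth σ (an arbitrary choice here, in line with the kernels used below), evaluating such a smoothed field boils down to a kernel matrix-vector product - a minimal sketch, with helper names of our own:
def gaussian_kernel(x, y, σ = .1) :
    "Gaussian kernel matrix, K_ij = exp(-|x_i-y_j|²/σ²)."
    return (-sqdistances(x, y)/σ**2).exp()

def smoothed_field(x, x_i, p_i, σ = .1) :
    "Evaluates v(x) = Σ_i k(x-x_i) p_i at the locations x."
    return gaussian_kernel(x, x_i, σ) @ p_i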
Regularization. Keep in mind that if the Fourier transform of $k$ is positive, any sampled vector field $v$ may be generated through such linear combinations. To enforce some kind of prior, we thus have to introduce an additional regularization term to our cost function.
From a theoretical point of view, using a positive kernel, the best penalty term is the dual kernel norm
$$ \begin{align} \big\|\sum_{i=1}^N p_i\,\delta_{x_i}\big\|_k^2 ~=~ \big\langle \sum_{i=1}^N p_i\,\delta_{x_i}, k\star \sum_{i=1}^N p_i\,\delta_{x_i} \big\rangle ~=~\sum_{i,j=1}^N \langle p_i,p_j\rangle\,k(x_i-x_j) \end{align} $$
which can be rewritten as a Sobolev-like penalty on the vector field $v$: $$ \begin{align} \big\|\sum_{i=1}^N p_i\,\delta_{x_i}\big\|_k^2 ~=~\iint_{\omega\in\mathbb{R}^2} \frac{|\widehat{v}(\omega)|^2}{\widehat{k}(\omega)}\,\text{d}\omega ~=~ \|v\|_{k}^{*2}. \end{align} $$
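With the same Gaussian kernel, this penalty is a simple quadratic form on the momenta - a minimal sketch, reusing the scal and gaussian_kernel helpers defined above:
def kernel_norm2(p_i, x_i, σ = .1) :
    "Squared kernel norm  Σ_ij <p_i,p_j> k(x_i-x_j)  of the momentum field Σ_i p_i δ_{x_i}."
    return scal( p_i, gaussian_kernel(x_i, x_i, σ) @ p_i )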
As we strive to optimize a cost that reads
$$ \begin{align} \text{Cost}(p) ~~=~~ \lambda_1\,\text{Fidelity}\big(\sum \alpha_i \delta_{x_i+v_i}, \beta\big) ~+~ \lambda_2\,\big\|\sum p_i\,\delta_{x_i}\big\|_k^2, \end{align} $$
the algorithm will find a compromise between accuracy and stretching.
Kernel dot product. In practice, we use a sampled interpolator
$$ \begin{align} \Phi^k_p~:~ x=(x_i)\in\mathbb{R}^{N\times2} ~\mapsto~ x~+~v~=~ x~+~ K_{xx} p, \end{align} $$
where $K$ is the kernel matrix of the $x_i$'s and $p$ is the momentum field encoded as an N-by-2 tensor.
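In code, this sampled interpolator is a one-liner built on the smoothed_field helper above - a minimal sketch (the name Phi is ours), which Exercise 1 below asks you to wrap, together with the regularized cost, into a full registration module:
def Phi(x, p, σ = .1) :
    "Sampled interpolator  Φ^k_p : x ↦ x + K_xx p."
    return x + smoothed_field(x, x, p, σ)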
Exercise 1: Implement a (linear) kernel registration module.
%run -i nt_solutions/measures_2/exo1
Smooth linear deformations have been extensively studied in the 90's, especially with Bookstein's Thin Plate Spline kernel:
$$ \begin{align} \|v\|_{k}^{*2} ~~=~~ \iint_{\mathbb{R}^2} \|\partial^2_{xx} v(x)\|^2~ \text{d}x \end{align} $$
which allows us to retrieve affine deformations for free.
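For reference, here is a sketch of the TPS kernel matrix in 2D, $k(r) = r^2 \log r$ up to a multiplicative constant; the small ε is our own safeguard against $\log(0)$ on the diagonal, where the kernel vanishes anyway:
def tps_kernel(x_i, y_j) :
    "Thin Plate Spline kernel matrix, K_ij = r_ij² log(r_ij) with r_ij = |x_i-y_j|."
    r2 = sqdistances(x_i, y_j)
    return .5 * r2 * (r2 + 1e-12).log()   # r² log(r) = .5 * r² log(r²)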
Limits of the linear model. Assuming that the model $\Phi(\alpha)$ is close enough to $\beta$, we may use the matching momentum $p$ to characterize the deformation $\alpha\rightarrow\beta$. Its $k$-norm can be computed through
$$ \begin{align} \big\|\sum_{i=1}^N p_i\,\delta_{x_i}\big\|_k ~~=~~ \sqrt{\langle p, K_{xx}\,p\rangle_{\mathbb{R}^{N\times2}}} \end{align} $$
and can be used as a "shape distance" $\text{d}_k(\alpha\rightarrow \beta)$ that penalizes tearings. Going further, we may compute the Fréchet mean of a population and perform kernel PCA on the momenta that link a mean shape to the observed samples.
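Reusing the kernel_norm2 helper sketched above, this shape "distance" reads as follows, with p the matching momentum returned by a kernel registration:
def shape_distance(x, p, σ = .1) :
    "Kernel 'distance' d_k(α→β) = sqrt(<p, K_xx p>), computed from the matching momentum p."
    return kernel_norm2(p, x, σ).sqrt()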
Path distance in a space of measures. Unfortunately though, such an approach has a major drawback: The "kernel distance" is not symmetric. As it only makes use of the kernel matrix $K_{xx}$ computed on the source point cloud, it induces a bias which may be detrimental to statistical studies.
An elegant solution to this problem is to understand the kernel cost
$$ \langle p, K_{xx} p\rangle ~=~ \langle v, K_{xx}^{-1} v\rangle $$
as a Riemannian, infinitesimal metric that penalizes small deformations. We may then look for trajectories
$$ \begin{align} \alpha(\cdot)~:~t\in[0,1] ~\mapsto~ \alpha(t)~=~\sum_{i=1}^N \alpha_i\,\delta_{x_i(t)} \end{align} $$
such that $\alpha(0)=\alpha$ and minimize a cost
$$ \begin{align} \text{Cost}(\alpha(\cdot)) ~~=~~ \lambda_1\,\text{Fidelity}(\alpha(1),\beta) ~+~ \lambda_2\,\int_0^1 \underbrace{\big\langle \dot{x}(t), K_{x(t),x(t)}^{-1}\,\dot{x}(t)\big\rangle}_{\|\dot{x}(t)\|_{x(t)}^2}\,\text{d}t \end{align} $$
with a regularization matrix that evolves along the path.
Momentum field. In practice, inverting the smoothing matrix is an ill-posed problem. We may thus parameterize the problem through the momentum field $(p_i(t))\in\mathbb{R}^{N\times 2}$ and write
$$ \begin{align} \text{Cost}(\alpha(\cdot)) ~~=~~ \lambda_1\,\text{Fidelity}(\alpha(1),\beta) ~+~ \lambda_2\,\int_0^1 \big\langle p(t), K_{x(t),x(t)}\,p(t)\big\rangle\,\text{d}t, \end{align} $$
keeping in mind that $x(0)$ is given by $\alpha$ and that $\dot{x}=v=K_{x,x}\,p$.
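Concretely, sampling $[0,1]$ with a handful of Euler steps, both the path and its kinetic energy can be computed with a simple loop - a minimal sketch (names and step size are ours), which Exercise 2 below asks you to wrap into a full registration model:
def path_energy(x_0, p_list, σ = .1, dt = .1) :
    "Discretized path energy  Σ_t <p(t), K_x(t)x(t) p(t)> dt  along an Euler trajectory."
    x, energy = x_0, 0.
    for p in p_list :                          # one (N,2) momentum field per timestep
        K = (-sqdistances(x, x)/σ**2).exp()    # kernel matrix on the current positions
        energy = energy + dt * scal( p, K @ p )
        x = x + dt * (K @ p)                   # Euler step:  x(t+dt) = x(t) + dt * v(t)
    return x, energy                           # endpoint x(1) and regularization term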
Exercise 2: Implement a path registration method, sampling the interval $[0,1]$ with 10 timesteps.
%run -i nt_solutions/measures_2/exo2
Riemannian geodesics. You may have guessed it already: the cost above involves the resolution of a shortest-path-finding problem in the Riemannian manifold of point clouds $x\in\mathbb{R}^{N\times 2}$ endowed with a metric
$$ \begin{align} \langle v, g_x v\rangle~=~\langle v, K_{x,x}^{-1} v\rangle. \end{align} $$
Fortunately, length-minimizing paths in a Riemannian manifold satisfy a simple ODE: the geodesic equation, which enforces "local straightness". In the position-velocity coordinates $(x,v=\dot{x})$, it can be written using the (in)famous Christoffel symbols. In the position-momentum coordinates $(q=x, p=K_{x,x}^{-1}\dot{x})$, it can simply be written as the symplectic gradient flow of the kinetic energy.
If we define the Hamiltonian as the kinetic energy function
$$ \begin{align} \text{H}~:~(q,p)\in\mathbb{R}^{N\times 2} ~\mapsto~ \text{H}(q,p)~=~\tfrac{1}{2}\langle p, K_{q,q}\,p\rangle ~=~\tfrac{1}{2}\|v\|_q^2, \end{align} $$
a curve $x(t)$ is geodesic if and only if it flows along the symplectic gradient (i.e. the gradient rotated by "-90°") of the Hamiltonian in the phase space $(q,p)$:
$$ \begin{align} \left\{ \begin{array}{c} \dot{q}_t~=~+\tfrac{\partial \text{H}}{\partial p}(q_t,p_t) \\ \dot{p}_t~=~-\tfrac{\partial \text{H}}{\partial q}(q_t,p_t) \end{array} \right. \end{align} $$
Exponential mapping. In practice, this equation means that we can generalize the $\text{Exp}$ mapping defined on the sphere to generic Riemannian manifolds: the routine below takes as input a point cloud $q$ and a momentum field $p$, and integrates up to time $t=1$ the unique solution $(q_t)$ of the geodesic equation such that
$$ \begin{align} q_0~=~ q ~~~\text{and}~~~ \dot{q}_0~=~v~=~K_{q,q}p. \end{align} $$
def K_xx(x_i, σ = .1) :
    "Gaussian kernel matrix on the point cloud x_i."
    return (-sqdistances(x_i,x_i)/σ**2 ).exp()

def H(q, p) :
    "Hamiltonian, i.e. kinetic energy  H(q,p) = .5 * <p, K_qq p>."
    return .5 * scal( p, K_xx(q) @ p )

def Exp(q,p) :
    "Simplistic Euler solver for Hamilton's geodesic equation."
    for _ in range(10) :
        [dq, dp] = grad(H(q,p), [q,p], create_graph=True)
        q = q + .1* dp
        p = p - .1* dq
    return q
Geodesic shooting. Thanks to the exponential map that parameterizes geodesics in our shape manifold, we may optimize in the small vector space of momenta at time $t=0$ instead of using the full (redundant) space of trajectories.
In practice, the geodesic shooting algorithm (from optimal control theory) is thus all about minimizing a cost
$$ \begin{align} \text{Cost}(p_0) ~~=~~ \lambda_1\,\text{Fidelity}\big(\,\text{Exp}(q_0,p_0),\beta\,\big) ~+~ \lambda_2\,\langle p_0, K_{q_0,q_0}\,p_0\rangle, \end{align} $$
where $q_0$ encodes the source measure $\alpha$.
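As an illustration, here is one possible sketch of such a shooting module, in the mould of L2Registration above; the trade-off weight λ and the zero initialization of $p_0$ are arbitrary choices on our part:
class ShootingRegistration(Model) :
    "Geodesic shooting: optimize on the initial momentum p_0 only."
    def __init__(self, α) :
        super(Model, self).__init__()
        # q_0 must require gradients so that Exp can differentiate the Hamiltonian:
        self.α, self.q = α[0].detach(), α[1].detach().requires_grad_()
        self.p = Parameter(torch.zeros_like(self.q))   # shooting momentum p_0

    def __call__(self, t=1.) :
        "Shoots along the geodesic:  q_t = Exp(q_0, t*p_0)."
        return self.α, Exp(self.q, t*self.p)

    def cost(self, target, λ = .01) :
        "Fidelity(Exp(q_0,p_0), β)  +  λ * <p_0, K_qq p_0>."
        return fidelity( self(), target ) \
             + λ * scal( self.p, K_xx(self.q) @ self.p )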
Exercise 3: Implement a Riemannian shooting registration method. Why is it close to a Riemannian $\text{Log}$ function?
%run -i nt_solutions/measures_2/exo3
Exercise 4: Playing around with the kernel function and the densities, investigate the properties of this algorithm.
Large deformations and diffeomorphisms. In this notebook, we've seen that we could endow arbitrary shape spaces with tearing-averse Riemannian structures, be they flat (linear) or curved (the generic case). Crucially, Hamilton's geodesic equation lets us implement the $\text{Exp}$ operator in terms of the cometric
$$ \begin{align} K_{q,q}~=~g_q^{-1} \end{align} $$
as we generalize the statistical analysis that we performed in Kendall's shape space.
Studied in the last 15 years, the LDDMM framework is all about using a kernel matrix as a cometric on spaces of point clouds. Remarkably, the exponential mappings can then be extended into diffeomorphisms of the ambient space, whose smoothness is controlled by the kernel norm of the shooting momentum.
In practice. These methods only make use of a convolution dot product
$$ \begin{align} k\star\cdot~:~ p=\sum p_i\,\delta_{x_i}~\mapsto~ \big( k\star p: x\mapsto \sum k(x-x_i)\,p_i \big) \end{align} $$
Unsurprisingly, they can thus be accelerated using appropriate numerical schemes. Let us mention Flash for a Fourier-domain implementation, or the KeOps library for fast GPU convolutions on point clouds: combined with clever dimensionality reductions, these toolboxes let us routinely work with 3D scans or meshes with ~100,000 vertices.
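As an illustration, here is what the convolution $k\star p$ may look like with KeOps LazyTensors - a sketch under the assumption that the pykeops package is installed; the M-by-N kernel matrix is symbolic and never stored in memory:
from pykeops.torch import LazyTensor

def keops_convolution(x, y, p, σ = .1) :
    "Computes (k ⋆ p)(x_i) = Σ_j exp(-|x_i-y_j|²/σ²) p_j without storing the kernel matrix."
    x_i  = LazyTensor( x[:,None,:] )                     # (M,1,2) symbolic tensor
    y_j  = LazyTensor( y[None,:,:] )                     # (1,N,2) symbolic tensor
    K_ij = (- ((x_i - y_j)**2).sum(-1) / σ**2 ).exp()    # (M,N) symbolic Gaussian kernel
    return K_ij @ p                                      # sum reduction over j: (M,2) dense output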
Conclusion. Going further, the method presented here has two main advantages:
On the other hand, it also has two major drawbacks:
In the years to come, thanks to the development of PyTorch and TensorFlow, we thus expect to see major advances in the field. For instance, memoizing the $\text{Log}$ mapping in a CNN autoencoder is a promising approach for brain registration. All things considered, here are the two main points that you should keep in mind from these workshop sessions:
Which (co)metric should you choose? LDDMM's convolution cometrics are fully parallel, and are thus the cheapest available on point clouds and images; in a medical setting, learning relevant metrics will be a major research topic in the coming years.