In this tutorial, we will learn how to combine landmark points to create new faces. This process is used in developing an Active Shape Model (ASM), a technique for detecting landmarks (key facial points) in a face image. Here is an example of landmark points on a face image:
In ASM, you first train a model on a set of pre-annotated faces, each with a fixed number of landmarks; the goal is then to automatically identify those landmarks in a new face image.
The goal of this tutorial is to walk you through the first step of ASM training. We have a dataset of images and their landmark points. We will 1) align all the landmarks, 2) compute the mean face, and 3) build a model for generating new faces using eigen faces.
For this tutorial, we use the MUCT database (www.milbo.org/muct), which contains 3755 faces, each manually annotated with 76 landmark points.
In [1]:
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
import pandas
After downloading the dataset, unzip the landmarks and use the file named muct76.csv.
In [2]:
df = pandas.read_csv('muct76.csv', header=0, usecols=np.arange(2,154), dtype=float)
df.head()
Out[2]:
The df contains both the $x$ and $y$ coordinates of the landmark points. There are 3755 images in the dataset, but the mirrored (reversed) images are also included here, so in total we have 7510 rows. Next, let us split the $x$ and $y$ coordinates into separate numpy arrays for convenience.
In [4]:
X = df.iloc[:, ::2].values
Y = df.iloc[:, 1::2].values
print(X.shape, Y.shape)
Now, let us visualize the data to get more familiar with it. We can plot the landmarks for one image using matplotlib. The landmarks identify the eyebrows, eyes, face boundary, nose, and upper and lower lips. These facial points are extremely useful for many computer vision applications.
In [5]:
plt.plot(X[0,:], Y[0,:])
plt.show()
In order to compute the mean face, we need to align all the faces together. Alignment typically involves three forms of transformation: linear translation, scaling, and rotation. Assuming that all the images have a straight pose, we can skip the rotation. The general formula for the transformation is the following:
$$\mathrm{T}\left(\begin{array}{c}x\\y\end{array}\right) = s \left[ \begin{array}{cc} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{array}\right] \left( \begin{array}{c}x \\ y \end{array}\right) + \left(\begin{array}{c} x_{translate} \\ y_{translate} \end{array} \right)$$

where $s$ is the scaling factor, $\theta$ is the rotation angle, and $x_{translate}$ and $y_{translate}$ are the translations along the $x$ and $y$ coordinates.
Here we only need two of these transformations: linear translation and scaling.
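As a side note, the full similarity transform above can be written as a small helper function. This is only a sketch (the function name and signature are my own, and the rotation convention matches the formula as written); the rest of this tutorial only uses translation and scaling:

```python
import numpy as np

def similarity_transform(x, y, s=1.0, theta=0.0, tx=0.0, ty=0.0):
    """Apply scaling s, rotation theta, and translation (tx, ty)
    to landmark coordinate arrays x and y, per the formula above."""
    xr = s * (np.cos(theta) * x + np.sin(theta) * y) + tx
    yr = s * (-np.sin(theta) * x + np.cos(theta) * y) + ty
    return xr, yr
```

With `theta=0` this reduces to exactly the scaling and translation used below.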
For mean centering, first, we need to find the center of each face, i.e. the mean of all $x$ and $y$ coordinates. Then, we can apply a linear translation so that the center of each face is located at coordinate $(0, 0)$. This way, we can overlap all the images.
In [6]:
xmeans = np.mean(X, axis=1)
ymeans = np.mean(Y, axis=1)
xmeans.shape
Out[6]:
In [7]:
## mean-centering each image
X = (X.T - xmeans).T
Y = (Y.T - ymeans).T
For scaling, we need to normalize the faces to a fixed size. One option is to scale each face so that its interpupillary distance (IPD), the distance between the centers of the pupils, equals a fixed value, while preserving the aspect ratio of the original face. The pupil centers are the landmarks indexed 31 and 36.
In [9]:
for i, (fx, fy) in enumerate(zip(X, Y)):
    interpupils_dist = np.abs(fx[31] - fx[36])
    X[i,:] = X[i,:] * 100 / interpupils_dist
    Y[i,:] = Y[i,:] * 100 / interpupils_dist
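For reference, the per-face loop above can also be expressed without an explicit loop using NumPy broadcasting. This is a sketch of an equivalent vectorized version; the function name `scale_to_ipd` and its keyword defaults are my own:

```python
import numpy as np

def scale_to_ipd(X, Y, left=31, right=36, target=100.0):
    """Rescale each face (one row of X and Y) so that its horizontal
    interpupillary distance equals `target`; vectorized over all faces."""
    ipd = np.abs(X[:, left] - X[:, right])   # one distance per face
    scale = (target / ipd)[:, np.newaxis]    # broadcast over all 76 landmarks
    return X * scale, Y * scale
```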
In [20]:
plt.figure(figsize=(10,10))
plt.subplot(2, 2, 1)
plt.plot(X[0,:], Y[0,:])
plt.subplot(2, 2, 2)
plt.plot(X[100,:], Y[100,:])
plt.subplot(2, 2, 3)
plt.plot(X[200,:], Y[200,:])
plt.subplot(2, 2, 4)
plt.plot(X[300,:], Y[300,:])
plt.show()
In [21]:
mean_face_x = np.mean(X, axis=0)
mean_face_y = np.mean(Y, axis=0)
plt.plot(mean_face_x, mean_face_y)
plt.show()
We now want a model whose parameters can be changed to generate new faces. Applying arbitrary distortions to a face, however, would not necessarily produce a realistic result.
Instead, we can compute the directions of highest variation among our current faces, and then move the mean face along those directions to obtain a new face. This is done by computing the eigen vectors of the covariance matrix of these faces, also called "Eigen Faces".
In [22]:
D = np.concatenate((X, Y), axis=1)
D.shape
Out[22]:
In [23]:
cov_mat = np.cov(D.T)
cov_mat.shape
Out[23]:
In [24]:
eig_values, eig_vectors = np.linalg.eig(cov_mat)
print(eig_values.shape, eig_vectors.shape)
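Note that `np.linalg.eig` does not guarantee any ordering of the eigenvalues, and for a symmetric matrix (such as a covariance matrix) the symmetric-specific routine `np.linalg.eigh` is generally preferable, since it returns real eigenvalues in ascending order. A small sketch (the helper name is my own) that flips them so the largest-variance directions come first:

```python
import numpy as np

def sorted_eigendecomposition(cov_mat):
    """Eigendecomposition of a symmetric matrix, sorted so that the
    largest eigenvalue (highest-variance direction) comes first."""
    eig_values, eig_vectors = np.linalg.eigh(cov_mat)  # real, ascending
    order = np.argsort(eig_values)[::-1]               # descending order
    return eig_values[order], eig_vectors[:, order]
```

With this ordering, taking the first `num_eigs` columns below is guaranteed to select the top-variance eigen faces.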
In [25]:
plt.plot(eig_values)
Out[25]:
In [26]:
num_eigs = 20
Phi_matrix = eig_vectors[:,:num_eigs]
Phi_matrix.shape
Out[26]:
Finally, everything is ready to generate new faces. Applying a distortion along those eigen faces generates a new face according to the following formula:
$$\hat{r} = \bar{r} + \phi b$$

where $\hat{r}$ is the new face, $\bar{r}$ is the mean face, $\phi$ is the matrix of eigen vectors, and $b$ is the vector of distortion parameters along each eigen vector.
In [29]:
def construct_newface(meanface, Phi_matrix, b):
    face = meanface + np.dot(Phi_matrix, b)
    return (face[:76], face[76:])
For clarity, we generate distortion vectors that are all zeros except for one non-zero element. That element is the corresponding eigen value multiplied by a small factor.
In [46]:
meanface = np.concatenate((mean_face_x, mean_face_y))
plt.figure(figsize=(10,10))
for i in range(4):
    plt.subplot(2, 2, i+1)
    b = np.zeros(shape=num_eigs)
    for j in (-0.025, -0.02, -0.015, -0.01, 0, 0.01, 0.015, 0.02, 0.025):
        b[i] = j*eig_values[i]
        xnew, ynew = construct_newface(meanface, Phi_matrix, b=b)
        plt.plot(xnew, ynew)
plt.show()
In this article, we used a dataset of face landmarks and developed a model for generating new faces. First, we applied scaling and linear translation to align the faces together. Then, we computed the mean face ($\bar{r}$) and the matrix of eigen vectors ($\phi$). Finally, using distortions along the first four eigen vectors, we generated new faces.
In the new faces, distortions along the first eigen vector mostly changed the width of the face, while distortions along the second eigen vector changed the face height and the shape of the nose.
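As a closing sketch, the model can also be run in reverse: because the eigen vectors are orthonormal, the distortion parameters of an existing aligned face can be recovered as $b = \phi^{T}(\hat{r} - \bar{r})$. The helper below illustrates this (the function name is my own, and it assumes a face already aligned like the training data):

```python
import numpy as np

def project_face(face, meanface, Phi_matrix):
    """Recover the distortion parameters b of an existing face:
    b = Phi^T (r - r_bar), valid because the eigen vectors are orthonormal."""
    return Phi_matrix.T @ (face - meanface)
```

Plugging the recovered `b` back into `construct_newface` gives the closest face the model can represent, which is the core of how ASM fits landmarks to a new image.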