Notes on implementation:
Write a method gen_sinusoidal(N)
that generates toy data like in fig 1.2 of the MLPR book. The method should have a parameter $N$, and should return $N$-dimensional vectors $\bx$ and $\bt$, where $\bx$ contains evenly spaced values from 0 up to and including $2\pi$, and the elements $t_i$ of $\bt$ are distributed according to:
$$t_i \sim \mathcal{N}(\mu_i, \sigma^2)$$
where $x_i$ is the $i$-th element of $\bx$, the mean $\mu_i = \sin(x_i)$ and the standard deviation $\sigma = 0.2$.
In [7]:
%matplotlib inline
import numpy as np
import math
import matplotlib.pyplot as plt
In [5]:
def gen_sinusoidal(N):
    """Return N evenly spaced x in [0, 2*pi] and noisy targets t ~ N(sin(x), 0.2^2)."""
    x = np.linspace(0, 2 * math.pi, N)
    t = np.random.normal(np.sin(x), 0.2)
    return x, t
# Test
x, t = gen_sinusoidal(5)
print(x)
print(t)
Write a method fit_polynomial(x, t, M)
that finds the maximum-likelihood solution of an unregularized $M$-th order polynomial for some dataset x
. The error function to minimize w.r.t. $\bw$ is:
$E(\bw) = \frac{1}{2} (\bPhi\bw - \bt)^T(\bPhi\bw - \bt)$
where $\bPhi$ is the feature matrix (or design matrix) as explained in the MLPR book at section 3.1.1, $\bt$ is the vector of target values. Your method should return a vector $\bw$ with the maximum-likelihood parameter estimates.
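Setting the gradient of $E(\bw)$ to zero yields the normal equations; as a quick check (the standard derivation, using the same symbols as above):
$$\nabla_{\bw} E(\bw) = \bPhi^T (\bPhi \bw - \bt) = 0 \quad\Longrightarrow\quad \bw_{ML} = \left(\bPhi^T \bPhi\right)^{-1} \bPhi^T \bt$$
This closed form is what fit_polynomial computes; in practice a pseudo-inverse or least-squares solver is preferable, since $\bPhi^T \bPhi$ can be singular when $M + 1 > N$.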
In [8]:
get_phi = lambda X, M: np.array([[x ** i for i in range(M + 1)] for x in X])

def fit_polynomial(X, t, M):
    Phi = get_phi(X, M)
    # Use the pseudo-inverse: Phi^T Phi is singular when M + 1 > len(X),
    # as in the test below (M = 4 with only 3 datapoints).
    w = np.linalg.pinv(Phi).dot(t)
    return w
# Test
print(fit_polynomial(np.array([1,2,3]), np.array([4,5,6]), 4))
Sample a dataset with $N=9$, and fit four polynomials with $M \in (0, 1, 3, 9)$.
For each value of $M$, plot the prediction function, along with the data and the original sine function. The resulting figure should look similar to fig 1.4 of the MLPR book. Note that you can use matplotlib's plt.subplot(...)
functionality for creating grids of figures.
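As a sketch of the requested 2×2 grid (written self-contained rather than relying on the cells above, so the two helpers are restated inline; the use of np.vander and a pseudo-inverse is one implementation choice among several):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch also runs as a plain script
import matplotlib.pyplot as plt

# Self-contained restatements of the helpers defined elsewhere in this notebook.
def gen_sinusoidal(N):
    x = np.linspace(0, 2 * np.pi, N)
    return x, np.random.normal(np.sin(x), 0.2)

def fit_polynomial(x, t, M):
    Phi = np.vander(x, M + 1, increasing=True)  # design matrix [1, x, x^2, ...]
    return np.linalg.pinv(Phi) @ t              # least-squares via pseudo-inverse

x_train, t_train = gen_sinusoidal(9)
x_grid = np.linspace(0, 2 * np.pi, 200)

fig, axes = plt.subplots(2, 2, figsize=(10, 6))
for ax, M in zip(axes.flat, (0, 1, 3, 9)):
    w = fit_polynomial(x_train, t_train, M)
    ax.plot(x_grid, np.sin(x_grid), label="sin(x)")
    ax.plot(x_train, t_train, "*", label="data")
    ax.plot(x_grid, np.vander(x_grid, M + 1, increasing=True) @ w, label=f"M={M}")
    ax.set_ylim(-1.5, 1.5)
    ax.legend(loc="upper right")
fig.savefig("polynomial_fits.png")
```

The M=9 panel interpolates all 9 points exactly (9 points, 10 parameters), which is where overfitting becomes visible.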
In [ ]:
from matplotlib import animation
from JSAnimation import IPython_display
N = 9
fig = plt.figure()
fig.set_size_inches(12, 6)
ax = plt.subplot(121)
line, = ax.plot([], [], lw=2)
X = np.arange(-0.1, 2.1 * math.pi, 0.1)
t = np.sin(X)
X_train, t_train = gen_sinusoidal(N + 4)  # generate the data before plotting it
ax.plot(X, t, label='original function')
ax.plot(X_train, t_train, '*', label='Points with noise')
ax.set_xlim([-0.1, 2.1 * math.pi])
ax.set_ylim([-1.5, 1.5])
ax1 = plt.subplot(122)
m = []
error = []
terror = []
error_line, = ax1.plot([], [], lw=2, label='Testing error')
terror_line, = ax1.plot([], [], lw=2, label='Training error')
def init():
line.set_data([], [])
error_line.set_data([], [])
terror_line.set_data([], [])
return line, error_line, terror_line
def animate(M):
w = fit_polynomial(X_train, t_train, M)
t_fit = get_phi(X, M).dot(w)
line.set_data(X, t_fit)
line.set_label('M=%d' % (M))
ax.legend(loc=1)
m.append(len(m))
    train_error = t_train - get_phi(X_train, M).dot(w)
    terror.append(math.sqrt(train_error.dot(train_error) / len(X_train)) / 2)
terror_line.set_data(m, terror)
squared_error = np.array(t_fit) - np.array(t)
error.append(math.sqrt(squared_error.transpose().dot(squared_error)/len(X)) / 2)
ax1.set_xlim([m[0], m[-1]])
ax1.set_ylim([-0.1 + min(error), max(error) + 0.1])
error_line.set_data(m, error)
ax1.legend(loc=1)
return line, error_line, terror_line
animation.FuncAnimation(fig, animate, init_func=init, frames=15, interval=500, blit=True)
Write a method fit_polynomial_reg(x, t, M, lamb)
that fits a regularized $M$-th order polynomial to the sinusoidal data, as discussed in the lectures, where lamb
is the regularization parameter $\lambda$. (Note that 'lambda' cannot be used as a variable name in Python since it is a reserved keyword.) The error function to minimize w.r.t. $\bw$ is:
$E(\bw) = \frac{1}{2} (\bPhi\bw - \bt)^T(\bPhi\bw - \bt) + \frac{\lambda}{2} \mathbf{w}^T \mathbf{w}$
For background, see section 3.1.4 of the MLPR book.
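Setting the gradient of the regularized error to zero adds a $\lambda \bw$ term to the normal equations (same notation as above):
$$\nabla_{\bw} E(\bw) = \bPhi^T (\bPhi \bw - \bt) + \lambda \bw = 0 \quad\Longrightarrow\quad \bw = \left(\bPhi^T \bPhi + \lambda \bI\right)^{-1} \bPhi^T \bt$$
This is what fit_polynomial_reg computes; the $\lambda \bI$ term also makes the matrix invertible even when $M + 1 > N$.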
In [62]:
def fit_polynomial_reg(X, t, M, lamb):
Phi = get_phi(X, M)
PhiT = Phi.transpose()
    w = np.linalg.inv(PhiT.dot(Phi) + lamb * np.eye(M + 1)).dot(PhiT).dot(t)
return w
# Test
print(fit_polynomial_reg(np.array([1,2,3]), np.array([4,5,6]), 4, 0.5))
Use cross-validation to find a good choice of $M$ and $\lambda$, given a dataset of $N=9$ datapoints generated with gen_sinusoidal(9)
. You should write a function that tries (loops over) a reasonable range of choices of $M$ and $\lambda$, and returns the choice with the best cross-validation error. In this case you can use $K=9$ folds, corresponding to leave-one-out crossvalidation.
You can let $M \in (0, 1, ..., 10)$, and let $\lambda \in (e^{-10}, e^{-9}, ..., e^{0})$.
To get you started, here's a method you can use to generate indices of cross-validation folds.
In [63]:
def kfold_indices(N, k):
all_indices = np.arange(N,dtype=int)
np.random.shuffle(all_indices)
    idx = np.floor(np.linspace(0, N, k + 1)).astype(int)  # fold boundaries must be integer indices
train_folds = []
valid_folds = []
for fold in range(k):
valid_indices = all_indices[idx[fold]:idx[fold+1]]
valid_folds.append(valid_indices)
train_folds.append(np.setdiff1d(all_indices, valid_indices))
return train_folds, valid_folds
k = 9
X_gen, t_gen = map(np.array, gen_sinusoidal(N))
def plot_approximation(M, lamb, Xi, ti):
w = fit_polynomial_reg(Xi, ti, M, lamb)
t_fit = get_phi(X, M).dot(w)
plt.plot(X, t_fit, label="M=" + str(M) + ", lamb=" + str(lamb))
def plot_orig():
X = np.arange(-0.1, 2.1 * math.pi, 0.1)
t = np.sin(X)
plt.plot(X, t, linewidth=5, label='Original function')
plt.plot(X_gen, t_gen, '*', label='Points with noise')
def cross_validate(M, lamb):
for train_set, test_set in zip(*kfold_indices(N, k)):
X_train = X_gen[train_set]
t_train = t_gen[train_set]
X_test = X_gen[test_set]
t_test = t_gen[test_set]
w = fit_polynomial_reg(X_train, t_train, M, lamb)
t_fit = get_phi(X_test, M).dot(w)
t_fits = get_phi(X, M).dot(w)
yield math.pow(t_fit[0] - t_test[0], 2), t_fits
M_opt = 0
lamb_opt = 0
min_avg_error = float('inf')
for M in range(11):
    for lamb in [math.exp(-i) for i in range(11)]:
        avg_error = np.average([error for error, _ in cross_validate(M, lamb)])
        if min_avg_error > avg_error:
            min_avg_error = avg_error
            M_opt = M
            lamb_opt = lamb
print('Min Error:', min_avg_error)
print('M_opt:', M_opt)
print('lamb_opt:', lamb_opt)
In [149]:
from mpl_toolkits.mplot3d import Axes3D
def plot_cross_validation_error():
points = [(lamb, M, np.average([error for error, _ in cross_validate(M, lamb)]))
for M in range(11)
for lamb in [math.exp(-i) for i in range(11)]]
lambs, Ms, errors = zip(*points)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(lambs, Ms, errors, c='g', marker='o', s=10)
ax.scatter([lamb_opt], [M_opt], [min_avg_error], c='r', marker='^', s=100)
ax.set_xlabel('Lambda')
ax.set_ylabel('M')
ax.set_zlabel('Square Error')
ax.view_init(elev=200)
plt.show()
plot_cross_validation_error()
In [150]:
def plot_cross_validation(M, lamb):
fig = plt.figure()
fig.set_size_inches(14, 8)
ax = plt.subplot(121)
ax.set_title('M=' + str(M) + ' lamb=' + str(lamb))
mean = np.sum(np.array([t_fit for _, t_fit in cross_validate(M, lamb)]), axis=0) / 9
for i, (_, t_fit) in enumerate(cross_validate(M, lamb)):
plt.plot(X, t_fit, label="approximation " + str(i))
plt.plot(X, mean, linewidth=3, label="Mean", color='r')
ax = plt.subplot(122)
plot_orig()
plt.plot(X, mean, linewidth=3, label="Mean", color='r')
plt.legend(bbox_to_anchor=(1.05, 1), loc=1, borderaxespad=0.)
plt.show()
plot_cross_validation(M_opt, lamb_opt)
plot_cross_validation(6, 0.01)
plot_cross_validation(6, 0.1)
plot_cross_validation(6, 1)
In [69]:
N = 9
fig = plt.figure()
fig.set_size_inches(12, 6)
ax = plt.subplot(121)
line, = ax.plot([], [], lw=2)
X = np.arange(-0.1, 2.1 * math.pi, 0.1)
t = np.sin(X)
X_train, t_train = gen_sinusoidal(N + 4)  # generate the data before plotting it
ax.plot(X, t, label='original function')
ax.plot(X_train, t_train, '*', label='Points with noise')
ax.set_xlim([-0.1, 2.1 * math.pi])
ax.set_ylim([-1.5, 1.5])
ax1 = plt.subplot(122)
m = []
error = []
terror = []
error_line, = ax1.plot([], [], lw=2, label='Testing error')
terror_line, = ax1.plot([], [], lw=2, label='Training error')
def init():
line.set_data([], [])
error_line.set_data([], [])
terror_line.set_data([], [])
return line, error_line, terror_line
def animate(M):
w = fit_polynomial_reg(X_train, t_train, M, lamb_opt)
t_fit = get_phi(X, M).dot(w)
line.set_data(X, t_fit)
line.set_label('M=%d' % (M))
ax.legend(loc=1)
m.append(len(m))
    train_error = t_train - get_phi(X_train, M).dot(w)
    terror.append(math.sqrt(train_error.dot(train_error) / len(X_train)) / 2)
terror_line.set_data(m, terror)
squared_error = np.array(t_fit) - np.array(t)
error.append(math.sqrt(squared_error.transpose().dot(squared_error)/len(X)) / 2)
ax1.set_xlim([m[0], m[-1]])
ax1.set_ylim([-0.1 + min(error), max(error) + 0.1])
error_line.set_data(m, error)
ax1.legend(loc=1)
return line, error_line, terror_line
animation.FuncAnimation(fig, animate, init_func=init, frames=15, interval=500, blit=True)
Create a comprehensible plot of the cross-validation error for each choice of $M$ and $\lambda$. Highlight the best choice.
Question: Explain over-fitting and underfitting, illuminated by your plot. Explain the relationship with model bias and model variance.
Answer: For small $M$ (or large $\lambda$) the model underfits: it is too inflexible to capture the sine shape, so both training and cross-validation error are high. This corresponds to high bias: averaged over datasets, the model is systematically wrong. For large $M$ with little regularization the model overfits: training error keeps dropping, but cross-validation error rises because the polynomial chases the noise in the 9 datapoints. This corresponds to high variance: the fitted function changes drastically from dataset to dataset. The plot shows the cross-validation error minimized at intermediate $M$ and $\lambda$, where the bias-variance trade-off is balanced.
In [123]:
fig = plt.figure()
fig.set_size_inches(14, 8)
print('M_opt:', M_opt)
print('lamb_opt:', lamb_opt)
plot_orig()
X_gen, t_gen = map(np.array, gen_sinusoidal(N))
print('New dataset:', X_gen, t_gen)
plot_approximation(M_opt, lamb_opt, X_gen, t_gen)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
In [117]:
def gen_sinusoidal2(N):
    """Like gen_sinusoidal, but with x sampled uniformly at random from [0, 2*pi]."""
    x = np.random.uniform(0, 2 * math.pi, N)
    t = np.random.normal(np.sin(x), 0.2)
    return x, t

# Test
x, t = gen_sinusoidal2(5)
print(x)
print(t)
You're going to implement a Bayesian linear regression model, and fit it to the sinusoidal data. Your regression model has a zero-mean isotropic Gaussian prior over the parameters, governed by a single (scalar) precision parameter $\alpha$, i.e.:
$$p(\bw \;|\; \alpha) = \mathcal{N}(\bw \;|\; 0, \alpha^{-1} \bI)$$
The covariance and mean of the posterior are given by:
$$\bS_N = \left( \alpha \bI + \beta \bPhi^T \bPhi \right)^{-1}$$
$$\bm_N = \beta\, \bS_N \bPhi^T \bt$$
where $\alpha$ is the precision of the prior over the parameters, and $\beta$ is the noise precision. See MLPR chapter 3.3 for background.
Write a method fit_polynomial_bayes(x, t, M, alpha, beta)
that returns the mean $\bm_N$ and covariance $\bS_N$ of the posterior for a $M$-th order polynomial, given a dataset, where x
, t
and M
have the same meaning as in question 1.2.
In [118]:
def fit_polynomial_bayes(X, t, M, alpha, beta):
    Phi = get_phi(X, M)
    PhiT = Phi.transpose()
    Sn = np.linalg.inv(alpha * np.eye(M + 1) + beta * PhiT.dot(Phi))
    mn = beta * Sn.dot(PhiT).dot(t)
    return mn, Sn
# Test
X = np.array([1, 2, 3])
t = np.array([4, 5, 6])
mn, Sn = fit_polynomial_bayes(X, t, 5, 1, 2)
print(mn)
print(Sn)
The predictive distribution of Bayesian linear regression is:
$$p(t \;|\; \bx, \bt, \alpha, \beta) = \mathcal{N}(t \;|\; \bm_N^T \phi(\bx), \sigma_N^2(\bx))$$
$$\sigma_N^2(\bx) = \frac{1}{\beta} + \phi(\bx)^T \bS_N \phi(\bx)$$
where $\phi(\bx)$ are the computed features for a new datapoint $\bx$, and $t$ is the predicted variable for datapoint $\bx$.
Write a function predict_polynomial_bayes(x, m, S, beta)
that returns the predictive mean and variance given a new datapoint x
, posterior mean m
, posterior covariance S
and a choice of noise precision beta
.
In [119]:
def predict_polynomial_bayes(X, m, S, beta):
    M = len(m) - 1  # the polynomial order is implied by the posterior mean
    Phi = get_phi(X, M)
    mean = Phi.dot(m)
    variance = 1 / beta + np.sum(Phi.dot(S) * Phi, axis=1)
    return mean, variance
a) (5 points) Generate 7 datapoints with gen_sinusoidal2(7)
. Compute the posterior mean and covariance for a Bayesian polynomial regression model with $M=5$, $\alpha=\frac{1}{2}$ and $\beta=\frac{1}{0.2^2}$.
Plot the Bayesian predictive distribution, where you plot (for $x$ between 0 and $2 \pi$) $t$'s predictive mean and a 1-sigma predictive variance using plt.fill_between(..., alpha=0.1)
(the alpha argument induces transparency).
Include the datapoints in your plot.
b) (5 points) For a second plot, draw 100 samples from the parameters' posterior distribution. Each of these samples is a certain choice of parameters for 5-th order polynomial regression. Display each of these 100 polynomials.
In [120]:
X_gen, t_gen = gen_sinusoidal2(7)
M = 5
alpha = 1 / 2
beta = 1 / (0.2 * 0.2)
fig = plt.figure()
fig.set_size_inches(14, 8)
ax = plt.subplot(111)
X = np.arange(-0.00001, 2.0001 * math.pi, 0.01)
t = np.sin(X)
ax.plot(X, t, label='Original function')
ax.plot(X_gen, t_gen, '*', label='Points with noise')
mn, Sn = fit_polynomial_bayes(X_gen, t_gen, M, alpha, beta)
means, variances = predict_polynomial_bayes(X, mn, Sn, beta)
std = np.sqrt(variances)  # 1-sigma predictive band
ax.plot(X, means, 'r', label='Mean')
ax.fill_between(X, means - std, means + std, alpha=0.1)
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
fig = plt.figure()
fig.set_size_inches(14, 8)
ax = plt.subplot(111)
ax.plot(X_gen, t_gen, '*', label='Points with noise')
ax.fill_between(X, means - std * 3, means + std * 3, alpha=0.5)
# Draw 100 parameter samples from the posterior and plot each resulting polynomial
ws = np.random.multivariate_normal(mn, Sn, 100)
for w in ws:
    ax.plot(X, get_phi(X, M).dot(w), 'r', alpha=0.3)
plt.show()
a) (5 points) Why is $\beta=\frac{1}{0.2^2}$ the best choice of $\beta$ in section 2.4?
b) (5 points) In the case of Bayesian linear regression, both the posterior of the parameters $p(\bw \;|\; \bt, \alpha, \beta)$ and the predictive distribution $p(t \;|\; \bw, \beta)$ are Gaussian. In consequence (and conveniently), $p(t \;|\; \bt, \alpha, \beta)$ is also Gaussian (See MLPR section 3.3.2 and homework 2 question 4). This is actually one of the (rare) cases where we can make Bayesian predictions without resorting to approximative methods.
Suppose you have to work with some model $p(t\;|\;x,\bw)$ with parameters $\bw$, where the posterior distribution $p(\bw\;|\;\mathcal{D})$ given dataset $\mathcal{D}$ can not be integrated out when making predictions, but where you can still generate samples from the posterior distribution of the parameters. Explain how you can still make approximate Bayesian predictions using samples from the parameters' posterior distribution.
a) Because the data were generated with noise standard deviation $\sigma = 0.2$, the true noise precision is $\beta = \frac{1}{\sigma^2} = \frac{1}{0.2^2}$; matching $\beta$ to the actual noise level is therefore the best choice of $\beta$ in section 2.4.
b) Draw $S$ samples $\bw^{(s)} \sim p(\bw \;|\; \mathcal{D})$ from the posterior, and approximate the predictive distribution by the Monte Carlo average $p(t \;|\; x, \mathcal{D}) \approx \frac{1}{S} \sum_{s=1}^{S} p(t \;|\; x, \bw^{(s)})$. In practice: for each sample compute (or draw from) $p(t \;|\; x, \bw^{(s)})$, and summarize the resulting collection of predictions, e.g. by its mean and variance. As $S \to \infty$ this converges to the exact Bayesian prediction.
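A minimal sketch of this sampling scheme, assuming a toy one-parameter model $t = w x + \epsilon$ with a Gaussian posterior over $w$ (the posterior mean, covariance, and $\beta$ below are made-up illustrative values, not quantities computed above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian posterior over a single weight w, and noise precision beta
m_post = np.array([0.8])
S_post = np.array([[0.05]])
beta = 25.0                      # noise variance is 1 / beta = 0.04

# 1. Draw S samples w^(s) ~ p(w | D) from the posterior
S = 5000
w_samples = rng.multivariate_normal(m_post, S_post, size=S)

# 2. For each sample, draw a prediction t ~ p(t | x, w^(s))
x_new = 2.0
t_draws = rng.normal(w_samples[:, 0] * x_new, 1.0 / np.sqrt(beta))

# 3. The empirical distribution of t_draws approximates the Bayesian
#    predictive p(t | x, D); summarize it by its first two moments
pred_mean = t_draws.mean()
pred_var = t_draws.var()
```

For this linear-Gaussian toy case the exact predictive mean is $m x = 1.6$ and variance $x^2 S + 1/\beta = 0.24$, so the Monte Carlo estimates can be checked against the closed form; for a model where the integral over the posterior is intractable, only the sampling route remains.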