This lab consists of implementing the Ordinary Least Squares (OLS) algorithm, i.e. linear regression with a least-squares loss. Given a training set $ D = \left\{ \left(x^{(i)}, y^{(i)}\right), x^{(i)} \in \mathcal{X}, y^{(i)} \in \mathcal{Y}, i \in \{1, \dots, n \} \right\}$, recall (from lectures 1 and 2) that OLS aims at minimizing the following cost function $J$: $$J(\theta) = \dfrac{1}{2} \sum_{i = 1}^{n} \left( h\left(x^{(i)}\right) - y^{(i)} \right)^2$$ where $$h(x) = \sum_{j = 0}^{d} \theta_j x_j = \theta^T x.$$
For the sake of simplicity, we will be working on a small training set $D$ (the one we used in lectures 1 and 2):
living area (m$^2$) | price (1000's BGN)
---|---
50 | 30
76 | 48
26 | 12
102 | 90
In [ ]:
import numpy as np

X = np.array([[1, 50], [1, 76], [1, 26], [1, 102]])  # one row per sample; the first column is the intercept feature x_0 = 1
Y = np.array([30, 48, 12, 90])  # prices, in 1000's of BGN
In this simple example, the dimensionality is $d = 1$ and the number of samples is $n = 4$.
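Exercise: Define a function predict that takes as parameters a sample $x$ and a parameter vector $\theta$, and returns the prediction $h(x) = \theta^T x$.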
In [ ]:
def predict(x, theta):
    # TODO: return the prediction h(x) = theta^T x
    return
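A minimal sketch of a possible solution, assuming x and theta are NumPy arrays of the same length (with x containing the intercept feature $x_0 = 1$):

In [ ]:
def predict(x, theta):
    # h(x) = theta^T x: dot product between the parameters and the features
    return np.dot(theta, x)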
Exercise: Define a function cost_function
that takes as parameters the predicted label $\hat{y}$ and the actual label $y$ of a single sample and returns the loss of this pair, given by:
$$ \ell \left( y, \hat{y} \right) = \dfrac{1}{2}\left( y - \hat{y} \right)^2$$
In [ ]:
def cost_function(y_hat, y):
    # TODO: return the squared-error loss of the pair (y_hat, y)
    return
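A possible sketch, with the predicted label as first argument and the actual label as second:

In [ ]:
def cost_function(y_hat, y):
    # Squared-error loss of a single (prediction, label) pair
    return 0.5 * (y_hat - y) ** 2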
Now that we are able to compute the cost function for a single sample, we can easily compute the cost function for the whole training set by summing the per-sample values over all the samples. Recall that the total cost function is given by: $$J(\theta) = \dfrac{1}{2} \sum_{i = 1}^{n} \left( h\left(x^{(i)}\right) - y^{(i)} \right)^2$$ where, for all $i \in \{ 1, \dots, n \}$, we have: $$h\left(x^{(i)}\right) = \sum_{j = 0}^{d} \theta_j x^{(i)}_j = \theta^T x^{(i)}.$$
In [ ]:
def cost_function_total(X, Y, theta):
    # TODO: sum the per-sample losses over the whole training set
    return
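One possible sketch, reusing predict and cost_function (the signature, taking the whole training set and theta, is an assumption):

In [ ]:
def cost_function_total(X, Y, theta):
    # J(theta): sum of the per-sample losses over the training set
    total = 0.0
    for x, y in zip(X, Y):
        total += cost_function(predict(x, theta), y)
    return total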
Let's now test the code written above and check the total cost function we would have when $\theta = [0, 0]$.
In [ ]:
#TODO
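For instance, with the sketch above: when $\theta = [0, 0]$, every prediction is $0$, so $J(\theta) = \frac{1}{2}\left(30^2 + 48^2 + 12^2 + 90^2\right) = 5724$.

In [ ]:
theta = np.array([0, 0])
print(cost_function_total(X, Y, theta))  # expected output: 5724.0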
Exercise: Define a function gradient
that implements the gradient of the cost function for a given sample $(x, y)$. Recall from lectures 1 and 2 that the gradient is given by:
$$\nabla J(\theta) = \left[ \dfrac{\partial}{\partial \theta_0} J(\theta), \dots, \dfrac{\partial}{\partial \theta_d} J(\theta) \right]^T$$
where, for all $j \in \{0, \dots, d \}$:
$$ \dfrac{\partial}{\partial \theta_j} J(\theta) = \left( h\left(x\right) - y \right) x_j $$
In [ ]:
def gradient(x, y, theta):
    # TODO: return the gradient of the loss for the single sample (x, y)
    return
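A possible sketch, computing all the partial derivatives in one vectorized operation (x is assumed to contain the intercept feature $x_0 = 1$):

In [ ]:
def gradient(x, y, theta):
    # (h(x) - y) * x_j for every component j of theta
    return (predict(x, theta) - y) * x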
In our application, the dimensionality of our data (with the intercept) is 2, so the gradient has 2 values (corresponding to $\theta_0$ and $\theta_1$).
In [ ]:
#TODO
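For instance, on the first sample with $\theta = [0, 0]$, the sketch above gives $(0 - 30) \cdot [1, 50] = [-30, -1500]$:

In [ ]:
theta = np.array([0, 0])
print(gradient(X[0], Y[0], theta))  # expected output: [-30, -1500]

Exercise: Define a function gradient_total that computes the gradient of the total cost function $J(\theta)$ by summing the per-sample gradients over the whole training set.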
In [ ]:
def gradient_total(X, Y, theta):
    # TODO: sum the per-sample gradients over the whole training set
    return
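One possible sketch, mirroring cost_function_total:

In [ ]:
def gradient_total(X, Y, theta):
    # Gradient of J(theta): sum of the per-sample gradients
    total = np.zeros_like(theta, dtype=float)
    for x, y in zip(X, Y):
        total += gradient(x, y, theta)
    return total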
Let's now test the code written above and check the total gradient we would have when $\theta = [0, 0]$.
In [ ]:
#TODO
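With the sketches above, each per-sample gradient at $\theta = [0, 0]$ is $-y^{(i)} x^{(i)}$, so the total gradient is $[-180, -14640]$:

In [ ]:
theta = np.array([0, 0])
print(gradient_total(X, Y, theta))  # expected output: [-180., -14640.]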
Question: What is the sign of the gradient values? What would it mean if we had such a gradient when applying gradient descent?
Hint: Recall the gradient descent update.
We now have all the building blocks needed for the gradient descent algorithm, that is, the update rule: $$\theta \leftarrow \theta - \alpha \nabla J(\theta)$$ where $\alpha > 0$ is the learning rate.
Exercise: Define a function called gradient_descent_step
that performs an update on theta by applying the formula above.
In [ ]:
def gradient_descent_step(X, Y, theta, alpha):
    # TODO: apply one gradient descent update to theta
    return
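A possible sketch (the parameter name alpha for the learning rate is an assumption):

In [ ]:
def gradient_descent_step(X, Y, theta, alpha):
    # One batch update: theta <- theta - alpha * gradient of J at theta
    return theta - alpha * gradient_total(X, Y, theta)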
Try to run a few iterations manually. Play with the value of $\alpha$ to see how it impacts the algorithm.
In [ ]:
#TODO
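For example, with the sketches above ($\alpha = 10^{-5}$ is an arbitrary starting point; much larger values make the updates diverge on this data):

In [ ]:
theta = np.array([0.0, 0.0])
alpha = 1e-5
for _ in range(3):
    theta = gradient_descent_step(X, Y, theta, alpha)
    print(theta, cost_function_total(X, Y, theta))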
The function gradient_descent_step
implements a single step of the gradient descent algorithm. Implement a function called gradient_descent
that starts from a given $\theta$ (for example $\theta = [0, 0]$) and applies 100 gradient descent iterations. Display the total cost function $J(\theta)$ at each iteration.
In [ ]:
def gradient_descent(X, Y, theta, alpha, n_iterations=100):
    # TODO: apply n_iterations gradient descent steps, displaying J(theta) at each iteration
    return
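One possible sketch (the signature, in particular n_iterations, is an assumption):

In [ ]:
def gradient_descent(X, Y, theta, alpha, n_iterations=100):
    # Run n_iterations batch gradient descent steps from the given theta
    for i in range(n_iterations):
        theta = gradient_descent_step(X, Y, theta, alpha)
        print(i, cost_function_total(X, Y, theta))
    return theta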
Play with the function you've just implemented. Try different values of $\alpha$ and see the impact it has.
In [ ]:
#TODO
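For example (the name theta_trained and the values of $\alpha$ are arbitrary choices):

In [ ]:
theta_trained = gradient_descent(X, Y, np.array([0.0, 0.0]), alpha=1e-5)
# Try a larger learning rate, e.g. alpha=1e-3: on this data the cost blows up.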
Question: Look at the evolution of the cost function over the iterations. What comment can you make?
Question: What does the value theta_trained
represent?
As we have seen during lecture 1, gradient descent methods are often split into 2 different subfamilies:
- batch gradient descent, which uses all the samples of the training set to compute each update;
- stochastic gradient descent, which uses a single sample to compute each update.

The version we have implemented (gradient_descent_step and gradient_descent) corresponds to the batch version because it sums the gradients of all the samples in the training set. During the next session, we will work on the following:
Exercise: Try to implement the stochastic version of the gradient descent algorithm.
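A possible sketch of the stochastic version, where each update uses one sample drawn at random (the function name, the uniform sampling, and n_iterations are assumptions):

In [ ]:
def stochastic_gradient_descent(X, Y, theta, alpha, n_iterations=100):
    # Each update uses the gradient of a single randomly drawn sample
    for i in range(n_iterations):
        k = np.random.randint(len(X))
        theta = theta - alpha * gradient(X[k], Y[k], theta)
        print(i, cost_function_total(X, Y, theta))
    return theta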