What is Machine Learning?
Any machine learning problem can be assigned to one of two broad classifications: supervised learning and unsupervised learning.
In supervised learning we deal with a dataset that contains the right answers: we know what the correct output should look like for each input.
In unsupervised learning we don't know what the output should look like. Instead, we derive structure from data where we don't necessarily know the effect of the variables.
With unsupervised learning there is no feedback based on the prediction results.
Predicting housing prices requires supervised learning, more specifically a regression model, because the value we want to predict (the price) is continuous.
Establishing the notation for future usage: $x_i$ denotes an input variable (feature), $y_i$ the corresponding output (target) value, the pair $(x_i, y_i)$ a training example, and $m$ the number of training examples.
To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function $h : X \to Y$ so that $h(x)$ is a "good" predictor for the corresponding value of $y$. For historical reasons, this function $h$ is called a hypothesis. Pictorially, the process looks like this: the training set is fed to a learning algorithm, which outputs $h$; given a new input $x$, $h$ produces the predicted output $y$.
When the target variable that we’re trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem.
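For the housing example with a single input feature (say, the size of the house), the hypothesis is a straight line:

$$ h_\theta(x) = \theta_0 + \theta_1 x $$

where $\theta_0$ and $\theta_1$ are the parameters we want to fit.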
How do we fit the best possible line?
We can measure the accuracy of our hypothesis function by using a cost function.
This takes an average difference (actually a fancier version of an average) of all the results of the hypothesis with inputs from the $x$'s and the actual outputs, the $y$'s.
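Concretely, this is the squared-error cost function, the same $J(\theta_0, \theta_1)$ that the gradient descent update below differentiates:

$$ J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right)^2 $$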
The mean is halved (multiplied by $\frac{1}{2}$) as a convenience for the computation of gradient descent: differentiating the squared term produces a factor of 2 that cancels the $\frac{1}{2}$.
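As a minimal sketch (the function name `compute_cost` and the tiny dataset are illustrative, not from the notes), the cost can be computed like this:

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """Squared-error cost J(theta0, theta1) for linear regression."""
    m = len(y)                           # number of training examples
    predictions = theta0 + theta1 * x    # h_theta(x_i) for every example
    return np.sum((predictions - y) ** 2) / (2 * m)

# Example: a tiny synthetic dataset lying exactly on y = x
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(compute_cost(0.0, 1.0, x, y))  # perfect fit -> cost 0.0
```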
So we have our hypothesis function and we have a way of measuring how well it fits the data. Now we need to estimate the parameters of the hypothesis function. That's where gradient descent comes in.
The gradient descent algorithm starts with initial values for $\theta_0$ and $\theta_1$, then repeatedly updates them, trying to reach the lowest point of the cost function.
repeat until convergence:

$$ \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1) $$

where $j = 0, 1$ represents the feature index number and $\alpha$ is the learning rate (the step size).
But it's really important to keep in mind that we have to update $\theta_0$ and $\theta_1$ simultaneously.
At each iteration $j$, one should simultaneously update the parameters $\theta_1, \theta_2, \ldots, \theta_n$.
Updating a specific parameter before calculating another one on the $j^{(th)}$ iteration would yield a wrong implementation.
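A minimal sketch of a correct simultaneous update (the names `temp0` and `temp1` are illustrative), assuming the partial derivatives of $J$ have already been computed:

```python
def gradient_step(theta0, theta1, alpha, d_theta0, d_theta1):
    """One simultaneous gradient descent step.

    d_theta0 and d_theta1 are the partial derivatives of J with respect
    to theta0 and theta1, evaluated at the current parameter values.
    """
    # Compute both new values from the *old* parameters first...
    temp0 = theta0 - alpha * d_theta0
    temp1 = theta1 - alpha * d_theta1
    # ...then assign, so neither update sees a half-updated parameter.
    return temp0, temp1
```

Updating `theta0` in place and then using the new value while computing `theta1` would be the incorrect, non-simultaneous version.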
To build intuition, consider the simplified case with a single parameter $\theta_1$. Repeat until convergence: $$ \theta_1:=\theta_1-\alpha \frac{d}{d\theta_1} J(\theta_1) $$
Regardless of the sign of the slope $\frac{d}{d\theta_1} J(\theta_1)$, $\theta_1$ eventually converges to its minimum value: when the derivative is negative, $\theta_1$ increases, and when it is positive, $\theta_1$ decreases.
On a side note, we should adjust the parameter $\alpha$ to ensure that the gradient descent algorithm converges in a reasonable time. Failing to converge, or taking too long to reach the minimum, implies that our step size is wrong.
After each step, as the point gets closer to the minimum, the magnitude of the derivative gets smaller, so the steps get smaller even without us decreasing the value of $\alpha$.
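A quick sketch of this behavior on the toy cost $J(\theta_1) = \theta_1^2$ (both the toy function and the two learning rates are illustrative assumptions, not values from the notes):

```python
def descend(theta1, alpha, steps=5):
    """Gradient descent on J(theta1) = theta1**2, whose derivative is 2*theta1."""
    history = [theta1]
    for _ in range(steps):
        theta1 = theta1 - alpha * (2 * theta1)
        history.append(theta1)
    return history

print(descend(4.0, alpha=0.1))   # shrinking steps, converging toward 0
print(descend(4.0, alpha=1.1))   # step size too large: the iterates diverge
```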
When specifically applied to the case of linear regression, a new form of the gradient descent equation can be derived. We can substitute our actual cost function and our actual hypothesis function and modify the equation to:
$$ \begin{align*} \text{repeat until convergence: } \lbrace & \newline \theta_0 := & \theta_0 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}(h_\theta(x_{i}) - y_{i}) \newline \theta_1 := & \theta_1 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}\left((h_\theta(x_{i}) - y_{i}) x_{i}\right) \newline \rbrace& \end{align*} $$

So, this is simply gradient descent on the original cost function $J$. This method looks at every example in the entire training set on every step, and is called batch gradient descent. Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global optimum and no other local optima; thus gradient descent always converges (assuming the learning rate $\alpha$ is not too large) to the global minimum. Indeed, $J$ is a convex quadratic function. Here is an example of gradient descent as it is run to minimize a quadratic function.
The ellipses shown above are the contours of a quadratic function. Also shown is the trajectory taken by gradient descent, which was initialized at (48,30). The x’s in the figure (joined by straight lines) mark the successive values of θ that gradient descent went through as it converged to its minimum.
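Putting the pieces together, here is a minimal, self-contained sketch of batch gradient descent for linear regression (the synthetic data, the learning rate, and the iteration count are illustrative choices, not values from the notes):

```python
import numpy as np

def batch_gradient_descent(x, y, alpha=0.1, iterations=1000):
    """Fit h(x) = theta0 + theta1 * x by batch gradient descent."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        predictions = theta0 + theta1 * x   # h_theta(x_i) for all i
        errors = predictions - y            # h_theta(x_i) - y_i
        # Partial derivatives of J, using every training example (batch).
        grad0 = errors.sum() / m
        grad1 = (errors * x).sum() / m
        # Simultaneous update of both parameters.
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
    return theta0, theta1

# Example: noisy points around the line y = 2x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=50)
y = 2 * x + 1 + rng.normal(scale=0.1, size=50)
theta0, theta1 = batch_gradient_descent(x, y)
print(f"theta0 = {theta0:.2f}, theta1 = {theta1:.2f}")  # close to 1 and 2
```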