In [2]:
using Gadfly ## This package provides 'plot', the plotting function
using Distributions ## This package provides 'Normal', used to draw Gaussian-distributed random numbers
Linear regression is an approach for modeling the relationship between a scalar dependent variable $y$ and one or more explanatory variables (or independent variables) denoted $X$. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression.$^1$ (This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.$^2$)
We assume the model
$y_i = \beta_0 + \beta_1 X_i + \epsilon_i$, where $\epsilon_i \sim N(0, \sigma^2)$.
$^1$ David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 26. "A simple regression equation has on the right hand side an intercept and an explanatory variable with a slope coefficient. A multiple regression equation has two or more explanatory variables on the right hand side, each with its own slope coefficient."
$^2$ Rencher, Alvin C.; Christensen, William F. (2012), "Chapter 10, Multivariate regression – Section 10.1, Introduction", Methods of Multivariate Analysis, Wiley Series in Probability and Statistics, 709 (3rd ed.), John Wiley & Sons, p. 19, ISBN 9781118391679.
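Stacking the $n$ observations, the same model reads in matrix form (a standard restatement added here to connect with the closed-form solution used later):
$$ y = X\beta + \epsilon, \qquad \epsilon \sim N(0, \sigma^2 I) $$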
In [2]:
n = 100 # number of samples
Xᵣ = rand(n)*99.0 ## explanatory variable, uniform on [0, 99)
y = -7.3 + 2.5Xᵣ + rand(Normal(0.0, 27.0), n) ## linear signal plus Gaussian noise
plot(x=Xᵣ, y=y, Geom.point)
Out[2]:
Let's add the bias term, i.e., a column of $1$s, to the explanatory variables
In [3]:
X = [ones(n) Xᵣ]
Out[3]:
And compute the parameters $\beta_0$ and $\beta_1$ according to $$ \beta = (X^\prime X)^{-1} X^\prime y $$
Note:
This not only looks elegant but also translates almost directly into Julia code. However, inverting a $d\times d$ matrix $M$ to obtain $M^{-1}$ requires $O(d^3)$ operations.
https://www.coursera.org/learn/ml-regression/lecture/jOVX8/discussing-the-closed-form-solution
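As an aside (not part of the original notebook), the same least-squares fit can be computed without forming the inverse at all: Julia's backslash operator solves the problem through a QR factorization, which is the numerically preferred route. With the X and y defined above:
β_qr = X \ y ## least-squares solution via QR; agrees with inv(X'*X)*X'*y up to rounding error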
In [54]:
∧ = &
Out[54]:
In [55]:
true ∧ false
Out[55]:
In [4]:
β = inv(X'*X)*X'*y ## inv(M) for M⁻¹
Out[4]:
In [5]:
ŷ = X*β
Out[5]:
In [6]:
plot(
layer(x=X[:,2], y=y, Geom.point),
layer(x=X[:,2], y=ŷ, Geom.line)
)
Out[6]:
In [7]:
n = 100 # number of samples
X₁ = rand(n)*99.0
X₂ = rand(n)*51.0 - 26.8
X₃ = rand(n)*5.0 + 6.1
X₄ = rand(n)*1.0 - 0.5
X₅ = rand(n)*300.0
y = -7.3 + 2.5X₁ - 7.9X₂ + 1.5X₃ + 10.0X₄ + 0.13X₅ + rand(Normal(0.0, 27.0), n)
plot(x=y, Geom.histogram(bincount=20))
Out[7]:
In [8]:
X = [ones(n) X₁ X₂ X₃ X₄ X₅]
Out[8]:
In [9]:
β = inv(X'*X)*X'*y
Out[9]:
In [10]:
ŷ = X*β
Out[10]:
The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSD serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSD is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent.$^1$
$^1$ Hyndman, Rob J.; Koehler, Anne B. (2006). "Another look at measures of forecast accuracy". International Journal of Forecasting. 22 (4): 679–688. doi:10.1016/j.ijforecast.2006.03.001.
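Concretely, the quantity computed in the next cell is
$$ \mathrm{RMSD} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2} $$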
In [11]:
RMSD = √mean((ŷ-y).^2)
## alternatively
## Σ = sum
## RMSD = √(Σ((ŷ-y).^2)/n)
Out[11]:
Regularization, in mathematics and statistics and particularly in the fields of machine learning and inverse problems, is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.
In general, a regularization term $R(f)$ is added to the loss function: $$ \min_f \sum_{i=1}^{n} V(f(x_i), y_i) + \lambda R(f) $$ where $V$ is the underlying loss and $\lambda$ controls the importance of the regularization term.
A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters.$^1$
Regularization can be used to learn simpler models, induce models to be sparse, introduce group structure into the learning problem, and more.
We're going to add the L2 penalty $\lambda||\beta||_2^2$ to the least-squares objective, which yields$^2$
$$ \beta = (X^\prime X + \lambda I)^{-1} X^\prime y $$
$^1$ Bishop, Christopher M. (2007). Pattern Recognition and Machine Learning (Corr. printing ed.). New York: Springer. ISBN 978-0387310732.
$^2$ http://stats.stackexchange.com/questions/69205/how-to-derive-the-ridge-regression-solution
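Explicitly, the penalized objective is $$ \min_\beta \; ||y - X\beta||_2^2 + \lambda ||\beta||_2^2, $$ and setting its gradient to zero gives $(X^\prime X + \lambda I)\beta = X^\prime y$. As with ordinary least squares, the inverse need not be formed explicitly; a minimal sketch (assuming the λ, p, and X defined in the cells below):
β_ridge = (X'*X + λ*eye(p)) \ (X'*y) ## solves the ridge normal equations directly; same result as the inv(...) version below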
In [12]:
p = size(X,2) ## get number of parameters
λ = 10.0
Out[12]:
In [13]:
β₂ = inv(X'*X + λ*eye(p))*X'*y
Out[13]:
In [14]:
ŷ₂ = X*β₂
Out[14]:
In [15]:
RMSD₂ = √mean((ŷ₂-y).^2)
Out[15]:
In [3]:
rmse(ŷ, y) = √mean((ŷ-y).^2)
Out[3]:
In [16]:
n = 100 # number of samples
X₁ = rand(n)*99.0
X₂ = rand(n)*51.0 - 26.8
X₃ = rand(n)*5.0 + 6.1
X₄ = rand(n)*1.0 - 0.5
X₅ = rand(n)*300.0
X̃ = rand(n)*1.0 - 0.5
X = [ones(n) X₁ X₂ X₃ X₄ X₅]
p = size(X,2)
for σ in [50, 100, 200]
y = -7.3 + 2.5X₁ - 7.9X₂ + 1.5X₃ + 10.0X₄ + 0.13X₅ + 1.0X̃.^3 + rand(Normal(0.0, σ), n)
for λ in [0.0, 2.0, 10.0, 100.0]
β = inv(X'*X + λ*eye(p))*X'*y
ŷ = X*β
R = rmse(ŷ, y)
@printf("σ=%8.3lf\tλ=%8.3lf\tRMSD=%8.3lf\n", σ, λ, R)
end
end
In [52]:
n = 100
σ = 200
λ = 20
X₁ = rand(Normal(0.0, 50), n) ##linspace(-100, 100.0, n)
X = [ones(n) X₁]
p = size(X,2)
y₁ = -7.3 + 2.5X₁ + rand(Normal(0.0, σ), n)
LL = [layer(x=X₁, y=y₁, Geom.point)]
for λ in [0, 20, 100]
β = inv(X'*X + λ*eye(p))*X'*y₁
ŷ₁ = X*β
R = rmse(ŷ₁, y₁)
@printf("λ=%8.3lf\tRMSD=%8.3lf\n", λ, R)
push!(LL, layer(x=X₁, y=ŷ₁, Geom.line))
end
plot(LL...)
Out[52]: