Regression problems generally deal with estimating a set of unknown parameters ($\beta$) from a set of known "independent" variables $X$ such that an arbitrary function $f(X,\beta)$ (the model) approximates a set of "dependent" variables $Y$, i.e. $f(X,\beta)\approx Y$.
Many algorithms exist, depending on whether $f$ is linear or non-linear and on the particularities of the $X$ and $Y$ datasets.
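As a minimal sketch of this idea (using made-up synthetic data rather than the diabetes data introduced below), a straight line $f(X,\beta)=\beta_0+\beta_1 X$ can be fitted by ordinary least squares:
In [ ]:
import numpy as np
# synthetic example: the "true" coefficients 1.5 and 0.8 are arbitrary, chosen only for illustration
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=50)                      # "independent" variable
Y = 1.5 + 0.8 * X + rng.normal(scale=0.5, size=50)   # "dependent" variable with noise
beta_1, beta_0 = np.polyfit(X, Y, deg=1)             # least squares fit of a degree-1 polynomial
print("estimated beta_0, beta_1:", beta_0, beta_1)   # should come out close to 1.5 and 0.8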
Diabetes dataset
Most ML algorithm classes are developed and refined on general-purpose datasets. A few standard datasets have become famous, such as the diabetes dataset. While many things can be learned from it, it is used here mainly as an example for simple regression.
Ten baseline variables (age, sex, body mass index, average blood pressure, and six blood serum measurements) were obtained for each of n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline.
Linear regression
In the example below, a linear regression model is used to predict diabetes progression from a single feature. Which feature does best at predicting diabetes?
In [1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
diabetes = datasets.load_diabetes()
#diabetes
print(diabetes.data.shape, diabetes.target.shape)
print(diabetes.data[:5, :3])
ft = 2  # feature index: 0 - age, 1 - sex, 2 - bmi
X = diabetes.data[:, np.newaxis]  # expand with a new axis: (442, 10) -> (442, 1, 10)
print("Expansion:", diabetes.data.shape, X.shape)
print(X[0])
print(X[0, 0, ft])
Xt = X[:, :, ft]  # keep only the chosen feature, as a (442, 1) column
print(Xt.shape)
print(Xt[:5])
Crossvalidation
Crossvalidation is a general validation method in which part of the dataset is used to train a model (the training data) while the remaining part is used to estimate how effective the model is at predicting (the test data). This splits X and Y into four groups: training X, training Y, test X and test Y. Usually several crossvalidation runs are performed by randomly picking the training samples, and by convention the training set is often around 80% of the whole data.
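As a minimal sketch of a single random 80/20 split (reusing the Xt, diabetes.target, np and linear_model names defined above), one can shuffle the sample indices and hold out the last 20%; repeating this many times and averaging the scores gives a simple randomized crossvalidation:
In [ ]:
# one random 80/20 split, sketched with a shuffled index (assumes Xt and diabetes.target from above)
idx = np.random.permutation(len(Xt))
n_train = int(0.8 * len(Xt))               # roughly 80% of the samples for training
train, test = idx[:n_train], idx[n_train:]
cv_model = linear_model.LinearRegression()
cv_model.fit(Xt[train], diabetes.target[train])
print("test R2 for this split:", cv_model.score(Xt[test], diabetes.target[test]))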
Goodness of fit
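The "variance score" reported by regr.score below is the coefficient of determination $R^2$, which compares the squared prediction error to the variance of the target around its mean (1 is a perfect fit, 0 is no better than always predicting the mean):
$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$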
Task
Modify the script to perform crossvalidation on 100 randomly picked training sets. Design a crossvalidation function of your own and afterwards look for scikit-learn's own function and use it instead. What is KFold crossvalidation?
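For reference, a minimal KFold sketch is given below (in recent scikit-learn KFold lives in sklearn.model_selection; older releases exposed it with a different import path and arguments): the data is cut into k folds, and each fold serves once as the test set while the remaining k-1 folds are used for training.
In [ ]:
# minimal KFold sketch (assumes Xt and diabetes.target from above; import path is for recent scikit-learn)
from sklearn.model_selection import KFold
fold_scores = []
for train, test in KFold(n_splits=5, shuffle=True).split(Xt):
    fold_model = linear_model.LinearRegression()
    fold_model.fit(Xt[train], diabetes.target[train])
    fold_scores.append(fold_model.score(Xt[test], diabetes.target[test]))
print("mean R2 over 5 folds:", np.mean(fold_scores))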
In [24]:
#Split the data into training/testing sets
X_train = Xt[:-20]
X_test = Xt[-20:]
#Split the targets into training/testing sets
y_train = diabetes.target[:-20]
y_test = diabetes.target[-20:]
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
y_predict = regr.predict(X_test)
score = regr.score(X_test, y_test)
print('Betas (regression coefficients): \n', regr.coef_)
print("Mean square error: %.2f" % np.mean((y_predict - y_test) ** 2))
print('Variance score: %.2f' % score)
plt.scatter(X_test, y_test, color='black')
plt.plot(X_test, y_predict, color='blue', linewidth=3)
Out[24]:
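To address the question of which single feature predicts best, a small sketch (reusing the imports above and the same 20-sample hold-out) refits the model once per feature and prints the test score for each:
In [ ]:
# sketch: one-feature linear regression for each of the ten features, scored on the last 20 samples
for i in range(diabetes.data.shape[1]):
    Xf = diabetes.data[:, i:i + 1]          # single feature as a (442, 1) column
    single = linear_model.LinearRegression()
    single.fit(Xf[:-20], diabetes.target[:-20])
    print(i, single.score(Xf[-20:], diabetes.target[-20:]))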
In [29]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
diabetes = datasets.load_diabetes()
X = diabetes.data
#Split the data into training/testing sets
X_train = X[:-20]
X_test = X[-20:]
#Split the targets into training/testing sets
y_train = diabetes.target[:-20]
y_test = diabetes.target[-20:]
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
y_predict = regr.predict(X_test)
score = regr.score(X_test, y_test)
print('Coefficients: \n', regr.coef_)
print("Mean square error: %.2f"% np.mean((y_predict - y_test) ** 2))
print('Variance score: %.2f' % score)
Multivariate case, a PLS Regression example
So far our regression target was one-dimensional. The harder problem, however, involves multiple target variables, and is also called multivariate (or multiple) regression. One can still use linear regression techniques, with corrections that account for various limitations, one being that the X matrix must be full rank for the least squares optimization (the engine of linear regression) to work. There are also many other techniques, one of my favorites being partial least squares regression (PLS-R), which I choose to exemplify.
PLS is also called projection to latent structures, which might be the more appropriate name, since the multiple regression problem
$Y = XB + E$
is solved by projecting the X and Y matrices onto a latent (hidden) lower-dimensional space that describes them. Just as with least squares fitting, it is difficult to explain how this is done, and it requires some knowledge of factor analysis. The end result is a decomposition into products of score and loading matrices:
$X = T P^{\top} + E$
$Y = U Q^{\top} + F$
from which the coefficient matrix B is estimated.
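In the NIPALS-style formulation used by scikit-learn, B is assembled from the X weights $W$ (not shown in the decomposition above) together with the loadings, roughly as the standard expression
$B = W (P^{\top} W)^{-1} Q^{\top}$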
In [5]:
from sklearn.cross_decomposition import PLSRegression
n = 1000
q = 3
p = 10
X = np.random.normal(size=n * p).reshape((n, p))
B = np.array([[1, 2] + [0] * (p - 2)] * q).T
# each Yj = 1*X1 + 2*X2 + noise
#B = np.array([[1, 0.5, 2, 4] + [0] * (p - 4)] * q).T
Y = np.dot(X, B) + np.random.normal(size=n * q).reshape((n, q)) + 5
pls2 = PLSRegression(n_components=2)
pls2.fit(X, Y)
print("True B (such that: Y = XB + Err)")
print(B)
# compare pls2.coef_ with B
print("Estimated B")
# note: depending on the scikit-learn version, the attribute is coef_ (coefs in very old releases)
# and its shape may be (n_features, n_targets) or the transpose
print(np.round(pls2.coef_, 1))
Yp = pls2.predict(X)
from sklearn.metrics import r2_score
print "R2",r2_score(Y, Yp)
T, U = pls2.transform(X, Y) #Apply the dimension reduction learned on the training data
## Notice that the first component is usually well correlated with all the columns in X
cp = 0  # component index: 0 - first component, 1 - second component, etc.
xc = 0  # X matrix column index
import matplotlib.pyplot as plt
#plt.title(title)
plt.plot(T[:, cp], X[:, xc], "ob")  # scores of the chosen component vs. the chosen X column
#plt.plot(T[:, cp], X, "ob")
plt.title("T vs X, component " + str(cp + 1))
plt.ylabel('X matrix column ' + str(xc))
plt.xlabel('latent scores T[:, ' + str(cp) + ']')
Out[5]:
In [ ]: