In this section we study how different estimators may be chained.
In [ ]:
from sklearn import datasets
from sklearn.model_selection import train_test_split
digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
You can apply the dimensionality reduction manually, like so:
In [ ]:
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
pca = PCA(n_components=5)
pca.fit(X_train)
X_train_reduced = pca.transform(X_train)
X_test_reduced = pca.transform(X_test)
logistic = LogisticRegression()
logistic.fit(X_train_reduced, y_train)
logistic.score(X_test_reduced, y_test)
The situation where we learn a transformation on the training data and then apply it to the test data is very common in machine learning. scikit-learn therefore has a shortcut for this, called pipelines:
In [ ]:
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(PCA(n_components=5), LogisticRegression())
pipeline.fit(X_train, y_train)
pipeline.score(X_test, y_test)
As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same thing happens as above. When calling fit on the pipeline, it calls fit on each step in turn. After the first step is fit, it uses the transform method of the first step to create a new representation, which is then fed to the fit of the next step, and so on. On the last step, only fit is called. If we call score, only transform is called on each step - the data could be the test set, after all! Then, on the last step, score is called with the new representation. The same goes for predict.
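As a quick sanity check, you can inspect the fitted steps of the pipeline through its named_steps attribute (with make_pipeline, each step is named after its lowercased class):
In [ ]:
# inspect the steps fitted by pipeline.fit above
print(pipeline.named_steps['pca'].explained_variance_ratio_)
print(pipeline.named_steps['logisticregression'].coef_.shape)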
Building pipelines not only simplifies the code, it is also important for model selection. Say we want to grid-search C to tune our LogisticRegression above, and we do it like this:
In [ ]:
# this illustrates a common mistake. Don't use this code!
from sklearn.model_selection import GridSearchCV
pca = PCA(n_components=5)
pca.fit(X_train)
X_train_reduced = pca.transform(X_train)
X_test_reduced = pca.transform(X_test)
logistic = LogisticRegression()
grid = GridSearchCV(logistic, param_grid={'C': [.1, 1, 10, 100, 1000, 10000]}, cv=5)
grid.fit(X_train_reduced, y_train)
print(grid.score(X_test_reduced, y_test))
print(grid.best_params_)
Here, we did grid-search with cross-validation on X_train. However, when applying PCA, it saw all of X_train, not only the training folds! So it could use some knowledge about the test folds. This is called "contamination" of the test set, and it leads to overly optimistic estimates of generalization performance, or to badly selected parameters.
We can fix this with the pipeline, though:
In [ ]:
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(PCA(n_components=5),
LogisticRegression())
grid = GridSearchCV(pipeline,
param_grid={'logisticregression__C': [.1, 1, 10, 100, 1000, 10000]}, cv=5)
grid.fit(X_train, y_train)
print(grid.score(X_test, y_test))
print(grid.best_params_)
Note that we need to tell the pipeline at which step we want to set the parameter C. We can do this using the special __ syntax: the name before the __ is the name of the step (with make_pipeline, simply the lowercased class name), and the part after the __ is the parameter we want to set with grid-search.
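If you are unsure which parameter names are valid, you can list them all with get_params, which includes every step__parameter name usable in a grid:
In [ ]:
# list all parameters that can be set on the pipeline
print(sorted(pipeline.get_params().keys()))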
Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with GridSearchCV:
In [ ]:
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(PCA(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100],
"pca__n_components": [5, 10, 20, 40]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_)
grid.score(X_test, y_test)
Exercise: Create a pipeline out of a StandardScaler and Ridge regression and apply it to the Boston housing dataset (load it using sklearn.datasets.load_boston). Try adding the sklearn.preprocessing.PolynomialFeatures transformer as a second preprocessing step, and grid-search the degree of the polynomials (try 1, 2 and 3).
In [ ]:
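One possible solution sketch is below. Note that it assumes a scikit-learn version that still ships load_boston (the loader was deprecated and later removed in recent releases):
In [ ]:
# a possible solution - requires a scikit-learn version that still includes load_boston
from sklearn.datasets import load_boston
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV, train_test_split

boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target)

# scale, expand to polynomial features, then fit a Ridge regression
pipeline = make_pipeline(StandardScaler(), PolynomialFeatures(), Ridge())
grid = GridSearchCV(pipeline,
                    param_grid={'polynomialfeatures__degree': [1, 2, 3]}, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.score(X_test, y_test))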