Here we will talk about an important piece of machine learning: the extraction of quantitative features from data. By the end of this section you will know how to turn several common kinds of data into numerical feature arrays.
In addition, we will go over several basic tools within scikit-learn which can be used to accomplish the above tasks.
Recall that data in scikit-learn is expected to be in two-dimensional arrays, of size n_samples $\times$ n_features.
Previously, we looked at the iris dataset, which has 150 samples and 4 features.
In [ ]:
from sklearn.datasets import load_iris
iris = load_iris()
print(iris.data.shape)
These features are:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Numerical features such as these are pretty straightforward: each sample contains a list of floating-point numbers corresponding to the features
What if you have categorical features? For example, imagine there is data on the color of each iris:
color in [red, blue, purple]
You might be tempted to assign numbers to these features, i.e. red=1, blue=2, purple=3, but in general this is a bad idea: estimators tend to assume that numerical features lie on some continuous scale, so 1 and 2 would be treated as more alike than 1 and 3, which is usually not the case for categorical features.
A better strategy is to give each category its own dimension. The enriched iris feature set would then be:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
color=purple (1.0 or 0.0)
color=blue (1.0 or 0.0)
color=red (1.0 or 0.0)
Note that using many of these categorical features may result in data which is better represented as a sparse matrix, as we'll see with the text classification example below.
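This one-hot expansion can be sketched with scikit-learn's OneHotEncoder. Note that the color values below are invented for illustration; the real iris data has no color column.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Hypothetical color values -- the actual iris dataset has no such feature.
colors = np.array(['red', 'blue', 'purple', 'blue']).reshape(-1, 1)

enc = OneHotEncoder()
# fit_transform returns a sparse matrix; convert to dense for display
expanded = enc.fit_transform(colors).toarray()
print(enc.categories_)  # categories are sorted alphabetically
print(expanded)         # one column per category, exactly one 1.0 per row
```

Each row now contains a single 1.0 in the column for its category, so no artificial ordering between colors is implied.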
When the source data is encoded as a list of dicts where the values are either string names for categories or numerical values, you can use the DictVectorizer
class to compute the boolean expansion of the categorical features while leaving the numerical features untouched:
In [ ]:
measurements = [
    {'city': 'Dubai', 'temperature': 33.},
    {'city': 'London', 'temperature': 12.},
    {'city': 'San Francisco', 'temperature': 18.},
]
In [ ]:
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
vec
In [ ]:
vec.fit_transform(measurements).toarray()
In [ ]:
# get_feature_names() was renamed to get_feature_names_out() in scikit-learn 1.0
vec.get_feature_names_out()
Another common feature type is derived features, where some pre-processing step is applied to the data to generate features that are somehow more informative. Derived features may be based on dimensionality reduction (such as PCA or manifold learning), may be linear or nonlinear combinations of features (as in polynomial regression), or may be some more sophisticated transform of the features. The latter is often used in image processing.
For example, scikit-image provides a variety of feature
extractors designed for image data: see the skimage.feature
submodule.
We will see some dimensionality-based feature extraction routines later in the tutorial.
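The two kinds of derived features mentioned above can be sketched on the iris data itself: PCA produces linear combinations of the original columns, while PolynomialFeatures produces nonlinear (here, quadratic) combinations.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures

iris = load_iris()

# Linear combinations: project the 4 iris features onto 2 principal components.
pca = PCA(n_components=2)
X_pca = pca.fit_transform(iris.data)
print(X_pca.shape)   # (150, 2)

# Nonlinear combinations: degree-2 polynomial expansion of the same features.
poly = PolynomialFeatures(degree=2)
X_poly = poly.fit_transform(iris.data)
print(X_poly.shape)  # (150, 15): 1 bias + 4 linear + 10 quadratic terms
```

Either transformed array can be fed to any scikit-learn estimator in place of the original data.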
As an example of how to work with both categorical and numerical data, we will perform survival prediction for the passengers of the RMS Titanic.
We will use a version of the Titanic (titanic3.xls) dataset from Thomas Cason, as retrieved from Frank Harrell's webpage. We converted the .xls to .csv for easier manipulation without involving external libraries, but the data is otherwise unchanged.
We need to read in all the lines from the (titanic3.csv) file, set aside the keys from the first line, and find our labels (who survived or died) and data (attributes of that person). Let's look at the keys and some corresponding example lines.
In [ ]:
import os

f = open(os.path.join('datasets', 'titanic', 'titanic3.csv'))
print(f.readline())
lines = []
for i in range(3):
    lines.append(f.readline())
print(lines)
Harrell's site gives a broad description of the keys and what they mean; we show it here for completeness.
pclass     Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
survival   Survival (0 = No; 1 = Yes)
name       Name
sex        Sex
age        Age
sibsp      Number of Siblings/Spouses Aboard
parch      Number of Parents/Children Aboard
ticket     Ticket Number
fare       Passenger Fare
cabin      Cabin
embarked   Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
boat       Lifeboat
body       Body Identification Number
home.dest  Home/Destination
In general, it looks like name, sex, cabin, embarked, boat, body, and home.dest may be candidates for categorical features, while the rest appear to be numerical features. We can now write a function to extract features from a text line, shown below.
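Since helpers.py is not reproduced here, the following is only a rough sketch of what such a line processor might do; the function name, field handling, and missing-value choices below are our own assumptions, not the actual helper.

```python
import csv

def process_line_sketch(line, keys):
    """Split one CSV line, pull out the survival label, and convert a few
    numeric fields to floats, substituting 0.0 for missing values.
    A sketch only -- the real process_titanic_line may differ."""
    values = next(csv.reader([line]))  # csv handles quoted commas in names
    row = dict(zip(keys, values))
    label = int(row.pop('survived'))
    numeric_keys = ['pclass', 'age', 'sibsp', 'parch', 'fare']
    features = [float(row[k]) if row[k] else 0.0 for k in numeric_keys]
    return label, features

keys = ['pclass', 'survived', 'name', 'sex', 'age', 'sibsp', 'parch',
        'ticket', 'fare', 'cabin', 'embarked', 'boat', 'body', 'home.dest']
# An example row in the dataset's format (quoted fields contain commas).
line = ('1,1,"Allen, Miss. Elisabeth Walton",female,29,0,0,'
        '24160,211.3375,B5,S,2,,"St Louis, MO"')
label, features = process_line_sketch(line, keys)
print(label, features)
```

A real version would also expand the categorical fields (sex, embarked, and so on) into boolean columns, as DictVectorizer did above.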
Let's process an example line using the process_titanic_line function from helpers to see the expected output.
In [ ]:
from helpers import process_titanic_line
print(process_titanic_line(lines[0]))
Now that we see the expected format from the line, we can call a dataset helper which uses this processing to read in the whole dataset. See helpers.py for more details.
In [ ]:
from helpers import load_titanic
keys, train_data, test_data, train_labels, test_labels = load_titanic(
    test_size=0.2, feature_skip_tuple=(), random_state=1999)
print("Key list: %s" % keys)
With all of the hard data loading work out of the way, evaluating a classifier on this data becomes straightforward. To establish a baseline, we first see what score the simplest possible model, DummyClassifier, can achieve.
In [ ]:
from sklearn.metrics import accuracy_score
from sklearn.dummy import DummyClassifier
clf = DummyClassifier(strategy='most_frequent')
clf.fit(train_data, train_labels)
pred_labels = clf.predict(test_data)
print("Prediction accuracy: %f" % accuracy_score(test_labels, pred_labels))
Try executing the above classification using RandomForestClassifier instead of DummyClassifier.
Can you remove or create new features to improve your score? Try printing the feature importances as shown in this sklearn example, and remove features by adding arguments to feature_skip_tuple.
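To see what inspecting feature importances looks like without giving away the Titanic exercise, here is a sketch on synthetic data, where feature 0 fully determines the label and the rest are noise.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: only feature 0 carries signal; the others are noise.
rng = np.random.RandomState(1999)
X = rng.rand(200, 5)
y = (X[:, 0] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=1999)
clf.fit(X, y)

# One importance score per feature, summing to 1; feature 0 should dominate.
for i, imp in enumerate(clf.feature_importances_):
    print("feature %d: %.3f" % (i, imp))
```

Features with near-zero importance are natural candidates for removal via feature_skip_tuple.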
In [ ]: