In [1]:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p pandas,numpy,scikit-learn -d
Features can come in various different flavors. Typically, we distinguish between

- continuous features, and
- categorical features.

And the categorical features can be categorized further into:

- ordinal features (categories with an implied order, e.g., t-shirt sizes such as M < L < XL), and
- nominal features (categories without any implied order, e.g., colors).
Now, most implementations of machine learning algorithms require numerical data as input, and we have to prepare our data accordingly. This notebook contains some useful tips for how to encode categorical features using Python pandas and scikit-learn.
First, let us create a simple example dataset with 3 different kinds of features:
In [2]:
import pandas as pd
df = pd.DataFrame([
['green', 'M', 10.1, 'class1'],
['red', 'L', 13.5, 'class2'],
['blue', 'XL', 15.3, 'class1']])
df.columns = ['color', 'size', 'prize', 'class label']
df
Out[2]:
"Typical" machine learning algorithms handle class labels with "no order implied" - unless we use a ranking classifier (e.g., SVM-rank). Thus, it is save to use a simple set-item-enumeration to convert the class labels from a string representation into integers.
In [3]:
class_mapping = {label: idx for idx, label in enumerate(set(df['class label']))}
df['class label'] = df['class label'].map(class_mapping)
df
Out[3]:
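Since Python sets are unordered, the enumeration above may assign the integers differently between runs. A minimal variation that makes the mapping deterministic is to sort the unique labels before enumerating them:

# sketch; assumes the 'class label' column still holds the original strings
class_mapping = {label: idx for idx, label in enumerate(sorted(set(df['class label'])))}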
Ordinal features need special attention: We have to make sure that the correct values are associated with the corresponding strings. Thus, we need to set-up an explicit mapping dictionary:
In [4]:
size_mapping = {
    'XL': 3,
    'L': 2,
    'M': 1}
df['size'] = df['size'].map(size_mapping)
df
Out[4]:
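As a side note, pandas' ordered Categorical dtype offers an alternative way to encode ordinal features. A minimal sketch (note that the resulting codes start at 0 rather than 1):

# sketch: encode t-shirt sizes via an ordered Categorical
sizes = pd.Categorical(['M', 'L', 'XL'], categories=['M', 'L', 'XL'], ordered=True)
sizes.codes  # array([0, 1, 2], dtype=int8)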
Unfortunately, we can't apply the same kind of mapping scheme to the nominal color column that we used for the size mapping above, since there is no meaningful order among the colors. However, we can use another simple trick and convert the colors into binary features: each possible color value becomes a feature column itself (with values 1 or 0).
In [5]:
color_mapping = {
    'green': (0, 0, 1),
    'red': (0, 1, 0),
    'blue': (1, 0, 0)}
df['color'] = df['color'].map(color_mapping)
df
Out[5]:
In [6]:
import numpy as np

y = df['class label'].values
X = df.iloc[:, :-1].values

# flatten each row so that the binary color tuple expands into separate feature values
X = np.apply_along_axis(func1d=lambda x: np.array(list(x[0]) + list(x[1:])), axis=1, arr=X)

print('Class labels:', y)
print('\nFeatures:\n', X)
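The apply_along_axis call above works but is a bit opaque; an equivalent, more explicit sketch is to expand the color tuples into their own matrix and stack the remaining columns next to it:

# sketch; assumes 'color' still holds the binary tuples at this point
colors = np.array(df['color'].tolist())  # one (1/0, 1/0, 1/0) row per sample
X = np.column_stack((colors, df[['size', 'prize']].values))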
If we want to convert the features back into their original representation, we can simply do so by using inverted mapping dictionaries:
In [7]:
inv_color_mapping = {v: k for k, v in color_mapping.items()}
inv_size_mapping = {v: k for k, v in size_mapping.items()}
inv_class_mapping = {v: k for k, v in class_mapping.items()}
df['color'] = df['color'].map(inv_color_mapping)
df['size'] = df['size'].map(inv_size_mapping)
df['class label'] = df['class label'].map(inv_class_mapping)
df
Out[7]:
The scikit-learn machine learning library comes with many useful preprocessing functions that we can use for our convenience.
In [8]:
from sklearn.preprocessing import LabelEncoder
class_le = LabelEncoder()
df['class label'] = class_le.fit_transform(df['class label'])
size_mapping = {
    'XL': 3,
    'L': 2,
    'M': 1}
df['size'] = df['size'].map(size_mapping)
df
Out[8]:
The class labels can be converted back from integer to string via the inverse_transform method:
In [9]:
class_le.inverse_transform(df['class label'])
Out[9]:
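The fitted LabelEncoder also stores the learned string labels in its classes_ attribute (sorted alphabetically), and its transform method can encode further labels after fitting. For example:

class_le.classes_                         # array(['class1', 'class2'], dtype=object)
class_le.transform(['class2', 'class1'])  # array([1, 0])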
The DictVectorizer is another handy tool for feature extraction: it takes a list of dictionary entries (feature-value mappings) and transforms it into vectors. The expected input looks like this:
In [10]:
df.transpose().to_dict().values()
Out[10]:
Note that the dictionary keys in each row represent the feature column labels.
Now, we can use the DictVectorizer to turn this
mapping into a matrix:
In [11]:
from sklearn.feature_extraction import DictVectorizer
dvec = DictVectorizer(sparse=False)
X = dvec.fit_transform(df.transpose().to_dict().values())
X
Out[11]:
As we can see in the array above, the columns were reordered during the conversion (the DictVectorizer sorts the feature names alphabetically). However, we can simply add back the column names via the get_feature_names method.
In [12]:
pd.DataFrame(X, columns=dvec.get_feature_names())
Out[12]:
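By default (sparse=True), the DictVectorizer returns a sparse SciPy matrix, which is more memory-efficient for high-dimensional data; we only set sparse=False above for readability. A quick sketch:

dvec_sparse = DictVectorizer()  # sparse=True is the default
X_sparse = dvec_sparse.fit_transform(df.transpose().to_dict().values())
X_sparse.toarray()  # densify only for inspection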
Another useful tool in scikit-learn is the OneHotEncoder. The idea is the same as in the DictVectorizer example above; the only difference is that the OneHotEncoder expects integer columns as input. Since the color column still contains strings, we use the LabelEncoder first to prepare it before we apply the OneHotEncoder.
In [13]:
color_le = LabelEncoder()
df['color'] = color_le.fit_transform(df['color'])
df
Out[13]:
In [14]:
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(sparse=False)
X = ohe.fit_transform(df[['color']].values)
X
Out[14]:
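To make the one-hot array readable again, we can attach the original color names as column headers. Since LabelEncoder assigns integers in alphabetical order, its classes_ attribute lines up with the encoder's output columns; a small sketch:

pd.DataFrame(X, columns=color_le.classes_)  # columns: blue, green, red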
Also, pandas comes with a convenience function for creating dummy columns from nominal features, namely get_dummies.
But first, let us quickly regenerate a fresh example DataFrame where the size and class label columns are already taken care of.
In [20]:
import pandas as pd
df = pd.DataFrame([
['green', 'M', 10.1, 'class1'],
['red', 'L', 13.5, 'class2'],
['blue', 'XL', 15.3, 'class1']])
df.columns = ['color', 'size', 'prize', 'class label']
size_mapping = {
    'XL': 3,
    'L': 2,
    'M': 1}
df['size'] = df['size'].map(size_mapping)
class_mapping = {label: idx for idx, label in enumerate(set(df['class label']))}
df['class label'] = df['class label'].map(class_mapping)
df
Out[20]:
Using get_dummies will create a new column for every unique string in a given column:
In [21]:
pd.get_dummies(df)
Out[21]:
Note that the get_dummies function leaves the numeric columns untouched. How convenient!
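If we only want to expand a single column, we can pass it to get_dummies directly, or restrict the conversion via the columns parameter; a short sketch:

pd.get_dummies(df['color'], prefix='color')  # only the dummy columns
pd.get_dummies(df, columns=['color'])        # rest of the DataFrame kept as-is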