This notebook illustrates using TensorFlow/Keras, through Ibex's wrappers, to classify the Iris dataset.
First we load the dataset into a pandas.DataFrame.
In [3]:
import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn.model_selection import KFold, cross_val_score
from ibex.tensorflow.contrib.keras.wrappers.scikit_learn import KerasClassifier as PdKerasClassifier
import tensorflow
import seaborn as sns
sns.set_style('whitegrid')
from ibex.sklearn import ensemble as pd_ensemble
%pylab inline
In [2]:
iris = datasets.load_iris()
features = iris['feature_names']
iris = pd.DataFrame(
    np.c_[iris['data'], iris['target']],
    columns=features+['class'])
iris.head()
Out[2]:
As usual with tensorflow/keras, we need to write a function that builds the model.
In [5]:
def build_nn():
    np.random.seed(7)
    model = tensorflow.contrib.keras.models.Sequential()
    model.add(tensorflow.contrib.keras.layers.Dense(8, input_dim=4, activation='relu'))
    model.add(tensorflow.contrib.keras.layers.Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
build_nn()
Out[5]:
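Before wrapping the model, it can be useful to inspect the architecture. A minimal sketch (not part of the original run), assuming the standard Keras summary() method is available on the returned model:
In [ ]:
# Print a layer-by-layer description of the network (layer types, output shapes, parameter counts).
build_nn().summary()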
Now we build a PdKerasClassifier. Note the classes argument: we need to list the classes of the dependent variable explicitly.
In [6]:
estimator = PdKerasClassifier(
    build_fn=build_nn,
    classes=iris['class'].unique(),
    epochs=200,
    batch_size=5,
    verbose=0)
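For reference, the labels passed to classes and their balance can be checked directly; a quick sanity check, not required for the fit:
In [ ]:
# Distinct class labels of the dependent variable and how many samples each has.
iris['class'].value_counts()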
Following sklearn conventions, after a call to fit, the History object describing the fit is accessible via the history_ attribute.
In [9]:
estimator.fit(iris[features], iris['class']).history_
Out[9]:
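The fitted History can be used to examine the training run. A minimal sketch, assuming history_ is the usual Keras History object whose history dict holds per-epoch metric lists (here, the 'loss' entry):
In [ ]:
# Plot the per-epoch training loss recorded during the fit above.
plot(estimator.history_.history['loss'])
xlabel('epoch')
ylabel('training loss');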
In [10]:
kfold = KFold(n_splits=10, shuffle=True)
scores = cross_val_score(estimator, iris[features], iris['class'], cv=kfold)
scores
Out[10]:
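A quick numeric summary of the cross-validation results (mean and standard deviation over the ten folds):
In [ ]:
# Mean and spread of the 10-fold cross-validation scores.
print('accuracy: %.3f +/- %.3f' % (scores.mean(), scores.std()))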
In [12]:
sns.boxplot(x=scores, color='grey', orient='v');
ylabel('classification accuracy')
figtext(
    0,
    -0.1,
    'Classification scores for tensorflow/keras on the Iris dataset.');
In [ ]: