Title of Database: Wall-Following navigation task with mobile robot SCITOS-G5

The data were collected as the SCITOS G5 navigated through the room, following the wall in a clockwise direction for 4 rounds. To navigate, the robot uses 24 ultrasound sensors arranged circularly around its "waist". The numbering of the sensors starts at the front of the robot and increases in the clockwise direction.


In [59]:
# modules
from __future__ import print_function

from keras.layers import Input, Dense, Dropout
from keras.models import Model, Sequential, load_model
from keras.datasets import mnist
from keras.optimizers import RMSprop
from keras.callbacks import TensorBoard
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
from keras import layers
from keras import initializers
from IPython.display import SVG
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from matplotlib import axes
from matplotlib import rc

import keras
import matplotlib.pyplot as plt
import numpy as np
import math
import pydot
import graphviz
import pandas as pd
import IPython
import itertools

In [60]:
%matplotlib inline
font = {'family' : 'monospace',
        'weight' : 'bold',
        'size'   : 20}

rc('font', **font)

In [61]:
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    print(cm)

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

Import and basic data inspection


In [62]:
# import
data_raw = pd.read_csv('data/sensor_readings_24.csv', sep=",", header=None)
data = data_raw.copy()

The dataframe contains only positive values; the classes are encoded as strings in the column with index 24.
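A quick way to verify both claims is a two-line check (a sketch using the already loaded data_raw):

# sanity check: all sensor readings are non-negative, the class column holds strings
print((data_raw.iloc[:, :24] >= 0).all().all())  # expect True
print(data_raw[24].dtype)                        # expect object (i.e. strings)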


In [63]:
data.head()


Out[63]:
0 1 2 3 4 5 6 7 8 9 ... 15 16 17 18 19 20 21 22 23 24
0 0.438 0.498 3.625 3.645 5.0 2.918 5.0 2.351 2.332 2.643 ... 0.593 0.502 0.493 0.504 0.445 0.431 0.444 0.440 0.429 Slight-Right-Turn
1 0.438 0.498 3.625 3.648 5.0 2.918 5.0 2.637 2.332 2.649 ... 0.592 0.502 0.493 0.504 0.449 0.431 0.444 0.443 0.429 Slight-Right-Turn
2 0.438 0.498 3.625 3.629 5.0 2.918 5.0 2.637 2.334 2.643 ... 0.593 0.502 0.493 0.504 0.449 0.431 0.444 0.446 0.429 Slight-Right-Turn
3 0.437 0.501 3.625 3.626 5.0 2.918 5.0 2.353 2.334 2.642 ... 0.593 0.502 0.493 0.504 0.449 0.431 0.444 0.444 0.429 Slight-Right-Turn
4 0.438 0.498 3.626 3.629 5.0 2.918 5.0 2.640 2.334 2.639 ... 0.592 0.502 0.493 0.504 0.449 0.431 0.444 0.441 0.429 Slight-Right-Turn

5 rows × 25 columns

What is the distribution of the classes?


In [64]:
df_tab = data_raw
df_tab[24] = df_tab[24].astype('category')
tab = pd.crosstab(index=df_tab[24], columns="frequency")
tab.index.name = 'Class/Direction'
tab/tab.sum()


Out[64]:
col_0 frequency
Class/Direction
Move-Forward 0.404142
Sharp-Right-Turn 0.384348
Slight-Left-Turn 0.060117
Slight-Right-Turn 0.151393

The Move-Forward and Sharp-Right-Turn classes together account for nearly 80% of all observations. A classifier could therefore reach an accuracy of around 75% even if most of the features were eliminated.
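As a sanity check on that figure, a minimal sketch (reusing the tab crosstab from above) computes the accuracy of a trivial majority-class predictor:

# a predictor that always outputs the majority class sets the floor
# that any real model has to beat (about 0.40 here)
baseline = (tab / tab.sum()).values.max()
print('majority-class baseline accuracy:', baseline)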

Preprocessing

0. Map the string classes to integer values.


In [65]:
mapping = {label: idx for idx, label in enumerate(data[24].unique())}
print(mapping)
data.replace({24: mapping}, inplace=True)


{'Slight-Right-Turn': 0, 'Sharp-Right-Turn': 1, 'Move-Forward': 2, 'Slight-Left-Turn': 3}

In [66]:
data[24].unique()


Out[66]:
array([0, 1, 2, 3], dtype=int64)

1. Take a random sample of 90% of the rows of the dataframe; to ensure reproducibility, random_state is set. The remaining 10% are set aside for validation after training. The last column holds the class and is stored in the respective y variables.


In [67]:
data_train = data.sample(frac=0.9, random_state=42)
data_val = data.drop(data_train.index)

df_x_train = data_train.iloc[:,:-1]
df_y_train = data_train.iloc[:,-1]

df_x_val = data_val.iloc[:,:-1]
df_y_val = data_val.iloc[:,-1]
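As an aside, the train_test_split helper imported above could produce an equivalent split; a sketch assuming the same 90/10 ratio and seed (the exact rows drawn may differ from the .sample() approach):

# equivalent hold-out split via sklearn (rows may differ from .sample())
data_train_alt, data_val_alt = train_test_split(data, test_size=0.1, random_state=42)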

2. Normalization between 0 and 1


In [68]:
x_train = df_x_train.values
# min-max scaling using the global min/max of the whole matrix
x_train = (x_train - x_train.min()) / (x_train.max() - x_train.min())
y_train = df_y_train.values
y_train_cat = y_train  # keep the integer labels for later plots

x_val = df_x_val.values
x_val = (x_val - x_val.min()) / (x_val.max() - x_val.min())
y_val = df_y_val.values
y_eval = y_val  # keep the integer labels for the confusion matrix
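Note that each split above is scaled by its own global minimum and maximum. A common alternative, sketched below on the assumption that the training statistics should drive the scaling, fits a MinMaxScaler on the training split only and reuses it for validation:

# alternative (not used above): fit the scaler on the training data only,
# then transform the validation data with the same statistics
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
x_train_scaled = scaler.fit_transform(df_x_train.values)
x_val_scaled = scaler.transform(df_x_val.values)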

3. Turn the single label column into useful categorical variables by one-hot encoding it.


In [10]:
y_train = keras.utils.to_categorical(y_train, 4)
y_val = keras.utils.to_categorical(y_val, 4)
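For instance, the integer labels 0 and 2 become the rows [1, 0, 0, 0] and [0, 0, 1, 0]:

# illustration: integer labels -> one-hot rows
print(keras.utils.to_categorical([0, 2], 4))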

4. Set Global Parameters


In [11]:
epochsize = 150
batchsize = 24
shuffle = False
dropout = 0.1
num_classes = 4
input_dim = x_train.shape[1]
hidden1_dim = 30
hidden2_dim = 30
class_names = list(mapping.keys())  # materialize the dict view for plotting

Train Neural Net

Due to a tight schedule we will not perform any cross-validation, so our accuracy estimates may generalize slightly worse; we shall live with that. An alternative experimental setup would loop over several different dataframe samples in the preprocessing step, repeat all the steps below for each, and average the results.

The dimensions of the hidden layers are set somewhat arbitrarily, but a few runs have shown that 30 is a good number. The input_dim variable is 24 because there are 24 features. The aim is to build the best possible neural net.

Optimizer

RMSprop is a mini-batch gradient descent algorithm that divides the learning rate for each weight by a running average of the magnitudes of recent gradients for that weight. More information: http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf

The weights are initialized by a normal distribution with mean 0 and standard deviation of 0.05.
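For illustration, a minimal numpy sketch of a single RMSprop update (the rule from the slides, not Keras internals; the hyperparameter defaults are assumptions):

import numpy as np

def rmsprop_step(w, g, avg_sq, lr=0.001, rho=0.9, eps=1e-8):
    # keep a running average of the squared gradients per weight
    avg_sq = rho * avg_sq + (1 - rho) * g ** 2
    # scale each weight's step by the root mean square of its recent gradients
    w = w - lr * g / (np.sqrt(avg_sq) + eps)
    return w, avg_sq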


In [15]:
input_data = Input(shape=(input_dim,), dtype='float32', name='main_input')
hidden_layer1 = Dense(hidden1_dim, activation='relu', kernel_initializer='normal')(input_data)
dropout1 = Dropout(dropout)(hidden_layer1)
hidden_layer2 = Dense(hidden2_dim, activation='relu', kernel_initializer='normal')(dropout1)
dropout2 = Dropout(dropout)(hidden_layer2)
output_layer = Dense(num_classes, activation='softmax', kernel_initializer='normal')(dropout2)

model = Model(inputs=input_data, outputs=output_layer)

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

In [13]:
plot_model(model, to_file='images/robo1_nn.png', show_shapes=True, show_layer_names=True)

In [14]:
IPython.display.Image("images/robo1_nn.png")


Out[14]:

In [16]:
model.fit(x_train, y_train, 
          batch_size=batchsize,
          epochs=epochsize,
          verbose=0,
          shuffle=shuffle)
nn_score = model.evaluate(x_val, y_val)[1]
print(nn_score)


546/546 [==============================] - 0s 86us/step
0.934065934066

In [18]:
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_eval, model.predict(x_val).argmax(axis=-1))
np.set_printoptions(precision=2)

In [20]:
# Plot normalized confusion matrix
plt.figure(figsize=(20,10))
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
                      title='Normalized confusion matrix')


Normalized confusion matrix
[[ 0.9   0.03  0.08  0.  ]
 [ 0.    0.95  0.05  0.  ]
 [ 0.01  0.01  0.98  0.  ]
 [ 0.    0.03  0.13  0.85]]

Comparison

The following table is taken from a paper published in March 2017; you can find it here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5375835/


In [21]:
IPython.display.Image("images/2018-01-25 18_44_01-PubMed Central, Table 2_ Sensors (Basel). 2017 Mar; 17(3)_ 549. Published online.png")


Out[21]:

One can see that our results compare favourably. So we take this result as a reference and check how good our SAE might become.

Stacked Autoencoder

For this dataset we decided to go with a 24-16-8-16-24 architecture. Each layer below is pretrained greedily: a softmax classification head is attached and trained on the labels, and the resulting encoder weights are reused in the stacked model afterwards.

First layer


In [31]:
input_img = Input(shape=(input_dim,))
encoded1 = Dense(16, activation='relu')(input_img)
decoded1 = Dense(input_dim, activation='relu')(encoded1)
class1 = Dense(num_classes, activation='softmax')(decoded1)

autoencoder1 = Model(input_img, class1)
# binary_crossentropy on a 4-way softmax matches the original runs;
# categorical_crossentropy would be the more conventional choice here
autoencoder1.compile(optimizer=RMSprop(), loss='binary_crossentropy', metrics=['accuracy'])
encoder1 = Model(input_img, encoded1)
encoder1.compile(optimizer=RMSprop(), loss='binary_crossentropy')

In [32]:
autoencoder1.fit(x_train
                 , y_train
                 , epochs=50
                 , batch_size=24
                 , shuffle=True
                 , verbose=False
                 )


Epoch 1/50
4910/4910 [==============================] - 1s 166us/step - loss: 0.4869 - acc: 0.7612
Epoch 2/50
4910/4910 [==============================] - 0s 74us/step - loss: 0.4142 - acc: 0.8120
...
Epoch 49/50
4910/4910 [==============================] - 0s 60us/step - loss: 0.1078 - acc: 0.9609
Epoch 50/50
4910/4910 [==============================] - 0s 68us/step - loss: 0.1065 - acc: 0.9625
Out[32]:
<keras.callbacks.History at 0x17afa4eb5c0>

In [33]:
score1 = autoencoder1.evaluate(x_val, y_val, verbose=0)
print('Test accuracy:', score1[1])


Test accuracy: 0.95695970696

Second layer


In [34]:
first_layer_code = encoder1.predict(x_train)

encoded_2_input = Input(shape=(16,))
encoded2 = Dense(8, activation='relu')(encoded_2_input)
decoded2 = Dense(16, activation='relu')(encoded2)
class2 = Dense(num_classes, activation='softmax')(decoded2)

autoencoder2 = Model(encoded_2_input, class2)
autoencoder2.compile(optimizer=RMSprop(), loss='binary_crossentropy', metrics=['accuracy'])
encoder2 = Model(encoded_2_input, encoded2)
encoder2.compile(optimizer=RMSprop(), loss='binary_crossentropy')

In [36]:
autoencoder2.fit(first_layer_code
                 , y_train
                 , epochs=50
                 , batch_size=24
                 , shuffle=True
                 , verbose=False
                 )


Epoch 1/50
4910/4910 [==============================] - 1s 174us/step - loss: 0.4633 - acc: 0.7833
Epoch 2/50
4910/4910 [==============================] - 0s 59us/step - loss: 0.3842 - acc: 0.8243
...
Epoch 49/50
4910/4910 [==============================] - 0s 62us/step - loss: 0.1068 - acc: 0.9609
Epoch 50/50
4910/4910 [==============================] - 0s 64us/step - loss: 0.1060 - acc: 0.9628
Out[36]:
<keras.callbacks.History at 0x17afc32df28>

In [37]:
first_layer_code_val = encoder1.predict(x_val)

score2 = autoencoder2.evaluate(first_layer_code_val, y_val, verbose=0)
print('Test loss:', score2[0])
print('Test accuracy:', score2[1])


Test loss: 0.138333105124
Test accuracy: 0.958333333333

Data Reconstruction with SAE


In [41]:
sae_encoded1 = Dense(16, activation='relu')(input_img)
sae_encoded2 = Dense(8, activation='relu')(sae_encoded1)
sae_decoded1 = Dense(16, activation='relu')(sae_encoded2)
sae_decoded2 = Dense(24, activation='sigmoid')(sae_decoded1)

sae = Model(input_img, sae_decoded2)

sae.layers[1].set_weights(autoencoder1.layers[1].get_weights())
sae.layers[2].set_weights(autoencoder2.layers[1].get_weights())

sae.compile(loss='binary_crossentropy', optimizer=RMSprop())

In [42]:
sae.fit(x_train
        , x_train
        , epochs=50
        , batch_size=24
        , shuffle=True
        , verbose=False
        )


Epoch 1/50
4910/4910 [==============================] - 1s 217us/step - loss: 0.6127
Epoch 2/50
4910/4910 [==============================] - 0s 66us/step - loss: 0.5476
...
Epoch 49/50
4910/4910 [==============================] - 0s 68us/step - loss: 0.4705
Epoch 50/50
4910/4910 [==============================] - 0s 70us/step - loss: 0.4703
Out[42]:
<keras.callbacks.History at 0x17afc32de48>

In [43]:
score4 = sae.evaluate(x_val, x_val, verbose=0)
print('Test loss:', score4)


Test loss: 0.46517931236
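To get a feel for the reconstruction quality beyond the scalar loss, a small sketch (assuming sae, x_val and np are still in scope) compares a few validation rows with their reconstructions:

# mean absolute reconstruction error of the first five validation rows
recon = sae.predict(x_val)
print(np.abs(x_val[:5] - recon[:5]).mean(axis=1))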

Classification


In [48]:
input_img = Input(shape=(input_dim,))
sae_classifier_encoded1 = Dense(16, activation='relu')(input_img)
sae_classifier_encoded2 = Dense(8, activation='relu')(sae_classifier_encoded1)
class_layer = Dense(num_classes, activation='softmax')(sae_classifier_encoded2)

sae_classifier = Model(inputs=input_img, outputs=class_layer)

# initialize the encoder with the greedily pretrained weights, then fine-tune
sae_classifier.layers[1].set_weights(autoencoder1.layers[1].get_weights())  # 24 -> 16
sae_classifier.layers[2].set_weights(autoencoder2.layers[1].get_weights())  # 16 -> 8
sae_classifier.compile(loss='binary_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])

In [49]:
sae_classifier.fit(x_train, y_train
                   , epochs=50
                   , verbose=True
                   , batch_size=24
                   , shuffle=True)


Epoch 1/50
4910/4910 [==============================] - 1s 200us/step - loss: 0.5124 - acc: 0.7814
Epoch 2/50
4910/4910 [==============================] - 0s 62us/step - loss: 0.3310 - acc: 0.8716
...
Epoch 49/50
4910/4910 [==============================] - 0s 76us/step - loss: 0.0989 - acc: 0.9635
Epoch 50/50
4910/4910 [==============================] - 0s 76us/step - loss: 0.0984 - acc: 0.9654
Out[49]:
<keras.callbacks.History at 0x17aff8cdf98>

In [50]:
score5 = sae_classifier.evaluate(x_val, y_val)
print('Test accuracy:', score5[1])


546/546 [==============================] - 0s 438us/step
Test accuracy: 0.962912087912
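For a quick side-by-side view of the two classifiers evaluated so far, a minimal sketch (assuming nn_score and score5 are still in scope):

# compare the plain feed-forward net with the SAE-pretrained classifier
print('plain NN accuracy      :', nn_score)
print('SAE classifier accuracy:', score5[1])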

Plot a two-dimensional representation of the data


In [54]:
third_layer_code = encoder2.predict(encoder1.predict(x_train))

encoded_4_input = Input(shape=(8,))
encoded4 = Dense(2, activation='sigmoid')(encoded_4_input)
decoded4 = Dense(8, activation='sigmoid')(encoded4)
class4 = Dense(num_classes, activation='softmax')(decoded4)

autoencoder4 = Model(encoded_4_input, class4)
autoencoder4.compile(optimizer=RMSprop(), loss='binary_crossentropy', metrics=['accuracy'])
encoder4 = Model(encoded_4_input, encoded4)
encoder4.compile(optimizer=RMSprop(), loss='binary_crossentropy')

In [55]:
autoencoder4.fit(third_layer_code
                 , y_train
                 , epochs=100
                 , batch_size=24
                 , shuffle=True
                 , verbose=True
                 )


Epoch 1/100
4910/4910 [==============================] - 1s 217us/step - loss: 0.5224 - acc: 0.7500
Epoch 2/100
4910/4910 [==============================] - 0s 60us/step - loss: 0.4966 - acc: 0.7500
...
Epoch 99/100
4910/4910 [==============================] - 0s 55us/step - loss: 0.1738 - acc: 0.9288
Epoch 100/100
4910/4910 [==============================] - 0s 64us/step - loss: 0.1733 - acc: 0.9296
Out[55]:
<keras.callbacks.History at 0x17a82decda0>

In [57]:
third_layer_code_val = encoder2.predict(encoder1.predict(x_val))

score4 = autoencoder4.evaluate(third_layer_code_val, y_val, verbose=0)
print('Test loss:', score4[0])
print('Test accuracy:', score4[1])


Test loss: 0.206695011466
Test accuracy: 0.921703296703

In [58]:
fourth_layer_code = encoder4.predict(encoder2.predict(encoder1.predict(x_train)))

In [69]:
value1 = [x[0] for x in fourth_layer_code]
value2 = [x[1] for x in fourth_layer_code]
y_classes = y_train_cat

In [70]:
# use a new name so the original `data` dataframe is not overwritten
plot_data = {'value1': value1, 'value2': value2, 'class': y_classes}
df_plot = pd.DataFrame.from_dict(plot_data)
df_plot.head()


Out[70]:
class value1 value2
0 1 0.085901 0.020280
1 2 0.237249 0.994224
2 2 0.349625 0.737321
3 1 0.606431 0.539148
4 2 0.116137 0.989701

In [71]:
groups = df_plot.groupby('class')

# Plot
fig, ax = plt.subplots(figsize=(20,10))
# plt.figure(figsize=(20,10))
ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling
for name, group in groups:
    ax.plot(group.value1, group.value2, marker='o', linestyle='', ms=3, label=name, alpha=0.7)
ax.legend()

plt.show()