Basic autoencoder

All credit to https://blog.keras.io/building-autoencoders-in-keras.html. The following code is merely a rearrangement of the code from that excellent tutorial.


In [1]:
from keras.layers import Input, Dense
from keras.models import Model


Using Theano backend.

Using gpu device 0: GeForce GTX 760 (CNMeM is enabled with initial size: 40.0% of memory, cuDNN 5110)

In [2]:
# this is the size of our encoded representations
encoding_dim = 32  # 32 floats -> a compression factor of 24.5, since the input is 784 floats

# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input given the encoded representation
decoded = Dense(784, activation='sigmoid')(encoded)

# this model maps an input to its reconstruction (the full autoencoder)
autoencoder = Model(input_img, decoded)
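
The original Keras tutorial also builds standalone encoder and decoder models that share these layers; a sketch of that construction (the names encoder, decoder, and encoded_input are not used elsewhere in this notebook):

# separate encoder: maps an input image to its 32-dimensional code
encoder = Model(input_img, encoded)

# separate decoder: maps a code back to a reconstruction,
# reusing the last layer of the autoencoder
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))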

In [3]:
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
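
Binary crossentropy treats each of the 784 pixels as an independent Bernoulli target, which suits inputs normalized to [0, 1]. A minimal NumPy sketch of the per-sample loss, for illustration only (Keras computes this internally):

import numpy as np

def binary_crossentropy(target, pred, eps=1e-7):
    # mean over pixels of -[t*log(p) + (1-t)*log(1-p)]
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))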

In [4]:
from keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()  # the labels y_train and y_test are not needed

num_pixels = x_train.shape[1] * x_train.shape[2]
x_train = x_train.reshape((len(x_train), num_pixels))
x_test = x_test.reshape((len(x_test), num_pixels))

# Normalize pixel values to the [0, 1] range
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
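
A quick sanity check of the prepared arrays (optional):

print(x_train.shape)  # (60000, 784)
print(x_test.shape)   # (10000, 784)
assert 0.0 <= x_train.min() and x_train.max() <= 1.0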

In [5]:
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))


Train on 60000 samples, validate on 10000 samples
Epoch 1/50
60000/60000 [==============================] - 0s - loss: 0.3793 - val_loss: 0.2728
Epoch 2/50
60000/60000 [==============================] - 0s - loss: 0.2660 - val_loss: 0.2559
Epoch 3/50
60000/60000 [==============================] - 0s - loss: 0.2465 - val_loss: 0.2344
Epoch 4/50
60000/60000 [==============================] - 0s - loss: 0.2271 - val_loss: 0.2168
Epoch 5/50
60000/60000 [==============================] - 0s - loss: 0.2117 - val_loss: 0.2035
Epoch 6/50
60000/60000 [==============================] - 0s - loss: 0.2002 - val_loss: 0.1938
Epoch 7/50
60000/60000 [==============================] - 0s - loss: 0.1916 - val_loss: 0.1862
Epoch 8/50
60000/60000 [==============================] - 0s - loss: 0.1845 - val_loss: 0.1797
Epoch 9/50
60000/60000 [==============================] - 0s - loss: 0.1784 - val_loss: 0.1742
Epoch 10/50
60000/60000 [==============================] - 0s - loss: 0.1731 - val_loss: 0.1692
Epoch 11/50
60000/60000 [==============================] - 0s - loss: 0.1685 - val_loss: 0.1649
Epoch 12/50
60000/60000 [==============================] - 0s - loss: 0.1644 - val_loss: 0.1609
Epoch 13/50
60000/60000 [==============================] - 0s - loss: 0.1606 - val_loss: 0.1573
Epoch 14/50
60000/60000 [==============================] - 0s - loss: 0.1572 - val_loss: 0.1540
Epoch 15/50
60000/60000 [==============================] - 0s - loss: 0.1539 - val_loss: 0.1509
Epoch 16/50
60000/60000 [==============================] - 0s - loss: 0.1508 - val_loss: 0.1478
Epoch 17/50
60000/60000 [==============================] - 0s - loss: 0.1479 - val_loss: 0.1450
Epoch 18/50
60000/60000 [==============================] - 0s - loss: 0.1451 - val_loss: 0.1423
Epoch 19/50
60000/60000 [==============================] - 0s - loss: 0.1426 - val_loss: 0.1399
Epoch 20/50
60000/60000 [==============================] - 0s - loss: 0.1402 - val_loss: 0.1375
Epoch 21/50
60000/60000 [==============================] - 0s - loss: 0.1379 - val_loss: 0.1353
Epoch 22/50
60000/60000 [==============================] - 0s - loss: 0.1358 - val_loss: 0.1332
Epoch 23/50
60000/60000 [==============================] - 0s - loss: 0.1338 - val_loss: 0.1313
Epoch 24/50
60000/60000 [==============================] - 0s - loss: 0.1320 - val_loss: 0.1296
Epoch 25/50
60000/60000 [==============================] - 0s - loss: 0.1302 - val_loss: 0.1278
Epoch 26/50
60000/60000 [==============================] - 0s - loss: 0.1285 - val_loss: 0.1262
Epoch 27/50
60000/60000 [==============================] - 0s - loss: 0.1269 - val_loss: 0.1246
Epoch 28/50
60000/60000 [==============================] - 0s - loss: 0.1254 - val_loss: 0.1231
Epoch 29/50
60000/60000 [==============================] - 0s - loss: 0.1239 - val_loss: 0.1216
Epoch 30/50
60000/60000 [==============================] - 0s - loss: 0.1225 - val_loss: 0.1202
Epoch 31/50
60000/60000 [==============================] - 0s - loss: 0.1211 - val_loss: 0.1189
Epoch 32/50
60000/60000 [==============================] - 0s - loss: 0.1198 - val_loss: 0.1176
Epoch 33/50
60000/60000 [==============================] - 0s - loss: 0.1186 - val_loss: 0.1164
Epoch 34/50
60000/60000 [==============================] - 0s - loss: 0.1174 - val_loss: 0.1152
Epoch 35/50
60000/60000 [==============================] - 0s - loss: 0.1162 - val_loss: 0.1141
Epoch 36/50
60000/60000 [==============================] - 0s - loss: 0.1152 - val_loss: 0.1131
Epoch 37/50
60000/60000 [==============================] - 0s - loss: 0.1142 - val_loss: 0.1121
Epoch 38/50
60000/60000 [==============================] - 0s - loss: 0.1132 - val_loss: 0.1111
Epoch 39/50
60000/60000 [==============================] - 0s - loss: 0.1123 - val_loss: 0.1102
Epoch 40/50
60000/60000 [==============================] - 0s - loss: 0.1114 - val_loss: 0.1094
Epoch 41/50
60000/60000 [==============================] - 0s - loss: 0.1106 - val_loss: 0.1086
Epoch 42/50
60000/60000 [==============================] - 0s - loss: 0.1099 - val_loss: 0.1079
Epoch 43/50
60000/60000 [==============================] - 0s - loss: 0.1092 - val_loss: 0.1072
Epoch 44/50
60000/60000 [==============================] - 0s - loss: 0.1085 - val_loss: 0.1066
Epoch 45/50
60000/60000 [==============================] - 0s - loss: 0.1079 - val_loss: 0.1060
Epoch 46/50
60000/60000 [==============================] - 0s - loss: 0.1073 - val_loss: 0.1054
Epoch 47/50
60000/60000 [==============================] - 0s - loss: 0.1067 - val_loss: 0.1049
Epoch 48/50
60000/60000 [==============================] - 0s - loss: 0.1062 - val_loss: 0.1043
Epoch 49/50
60000/60000 [==============================] - 0s - loss: 0.1057 - val_loss: 0.1039
Epoch 50/50
60000/60000 [==============================] - 0s - loss: 0.1053 - val_loss: 0.1034
Out[5]:
<keras.callbacks.History at 0x17a39576f60>
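
The fit call returns the History object shown above; had it been captured, e.g. history = autoencoder.fit(...), the per-epoch losses could be plotted (a sketch, assuming matplotlib):

import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend()
plt.show()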

In [6]:
decoded_imgs = autoencoder.predict(x_test)
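
If the standalone encoder and decoder models sketched earlier were built, the same reconstructions can be obtained in two explicit steps:

encoded_imgs = encoder.predict(x_test)        # (10000, 32) codes
decoded_imgs = decoder.predict(encoded_imgs)  # (10000, 784) reconstructions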

In [7]:
import matplotlib.pyplot as plt

n = 10  # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
    which = np.random.randint(0, len(x_test))  # pick a random test image
    
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[which].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[which].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
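
Beyond eyeballing the plots, reconstruction quality can be summarized with a single number; a minimal sketch using mean squared error over the test set (not part of the original tutorial):

mse = np.mean((x_test - decoded_imgs) ** 2)
print('mean squared reconstruction error: %.4f' % mse)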