Convolutional Autoencoder

Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. As before, we start by loading the modules and the data.


In [1]:
%matplotlib inline

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

In [2]:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)


Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

In [3]:
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')


Out[3]:
<matplotlib.image.AxesImage at 0x7faf8484b4e0>

Network Architecture

The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.

Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
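As a quick sanity check on that "roughly 16%" figure, using the sizes suggested above:

```python
# Encoded representation: a 4x4 spatial grid with 8 feature maps
encoded_size = 4 * 4 * 8   # 128 values
original_size = 28 * 28    # 784 pixels

ratio = encoded_size / original_size
print(f"{encoded_size} / {original_size} = {ratio:.1%}")  # -> 128 / 784 = 16.3%
```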

What's going on with the decoder

Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides an easy way to create these layers, tf.nn.conv2d_transpose.
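To make the size relationship concrete, here's a small helper (purely illustrative, not part of TensorFlow) that computes the spatial output size of a transposed convolution along one axis, using the standard 'same'/'valid' padding formulas:

```python
def conv_transpose_output_size(in_size, kernel, stride, padding):
    """Spatial output size of a transposed convolution along one axis."""
    if padding == 'same':
        return in_size * stride
    elif padding == 'valid':
        return (in_size - 1) * stride + kernel
    raise ValueError(f"unknown padding: {padding}")

# One unit expands to a kernel-sized patch: 1 -> 3 with a 3x3 kernel, stride 1
print(conv_transpose_output_size(1, kernel=3, stride=1, padding='valid'))  # 3
# Doubling the width/height: 4 -> 8 with stride 2 and 'same' padding
print(conv_transpose_output_size(4, kernel=3, stride=2, padding='same'))   # 8
```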

However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. These are caused by overlap between the kernels, which can be avoided by setting the stride equal to the kernel size. In this Distill article by Augustus Odena et al., the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling), followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
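The idea behind nearest neighbor upsampling is simple enough to sketch in plain NumPy: for an integer scale factor, each pixel is just repeated along both spatial axes (tf.image.resize_images also handles non-integer target sizes, which this toy version doesn't):

```python
import numpy as np

def upsample_nearest(img, factor):
    """Nearest neighbor upsampling by an integer factor: (H, W) -> (H*f, W*f)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

x = np.array([[1, 2],
              [3, 4]])
print(upsample_nearest(x, 2))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```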

Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer's output will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. For convolutional layers, use tf.layers.conv2d. For example, you would write conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu) for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), 'same' padding, and a ReLU activation. Similarly, for the max-pool layers, use tf.layers.max_pooling2d.


In [4]:
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')

### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8

### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16

logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
# Now 28x28x1

decoded = tf.nn.sigmoid(logits, name='decoded')

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
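Since a 'same'-padded max-pool with stride 2 halves the spatial size rounding up (ceil(7/2) = 4), the encoder's shapes can be traced with a couple of lines (a hypothetical helper, just to double-check the comments above):

```python
import math

def pool_same(size, stride=2):
    """Output size of a 'same'-padded pooling layer: ceil(size / stride)."""
    return math.ceil(size / stride)

sizes = [28]
for _ in range(3):               # three conv + max-pool stages in the encoder
    sizes.append(pool_same(sizes[-1]))
print(sizes)  # [28, 14, 7, 4] -- matching the 28x28 -> 14x14 -> 7x7 -> 4x4 comments
```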

Training

As before, we'll train the network. Instead of flattening the images, though, we can pass them in as 28x28x1 arrays.


In [5]:
sess = tf.Session()

In [ ]:
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        imgs = batch[0].reshape((-1, 28, 28, 1))
        batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
                                                         targets_: imgs})

        print("Epoch: {}/{}...".format(e+1, epochs),
              "Training loss: {:.4f}".format(batch_cost))


Epoch: 1/20... Training loss: 0.6966
Epoch: 1/20... Training loss: 0.6931
Epoch: 1/20... Training loss: 0.6909
Epoch: 1/20... Training loss: 0.6891
...
Epoch: 2/20... Training loss: 0.1429
Epoch: 2/20... Training loss: 0.1416
Epoch: 2/20... Training loss: 0.1411
Epoch: 2/20... Training loss: 0.1426

In [13]:
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})

for images, row in zip([in_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)


fig.tight_layout(pad=0.1)



In [19]:
sess.close()

Denoising

As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use the noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.

Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, that is, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.

Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.


In [21]:
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')

### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16

### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32

logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
# Now 28x28x1

decoded = tf.nn.sigmoid(logits, name='decoded')

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)

In [22]:
sess = tf.Session()

In [ ]:
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        # Get images from the batch
        imgs = batch[0].reshape((-1, 28, 28, 1))
        
        # Add random noise to the input images
        noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
        # Clip the images to be between 0 and 1
        noisy_imgs = np.clip(noisy_imgs, 0., 1.)
        
        # Noisy images as inputs, original images as targets
        batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
                                                         targets_: imgs})

        print("Epoch: {}/{}...".format(e+1, epochs),
              "Training loss: {:.4f}".format(batch_cost))
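The noise step on its own, sketched with random data standing in for an MNIST batch (the clipping keeps pixel values in the [0, 1] range that the sigmoid output can match):

```python
import numpy as np

noise_factor = 0.5
imgs = np.random.rand(200, 28, 28, 1)        # stand-in for a batch of images in [0, 1]

# Add Gaussian noise, then clip back into [0, 1]
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)

print(noisy_imgs.shape, noisy_imgs.min() >= 0.0, noisy_imgs.max() <= 1.0)
```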

Checking out the performance

Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.


In [29]:
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)

reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})

for images, row in zip([noisy_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

fig.tight_layout(pad=0.1)