Convolutional Autoencoder

Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. As before, we start by loading the modules and the data.


In [1]:
%matplotlib inline

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

In [2]:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)


Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

In [3]:
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')


Out[3]:
<matplotlib.image.AxesImage at 0x1245a6ba8>

Network Architecture

The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder, though, might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out of the decoder, so we need to work our way back up from this narrow input layer. A schematic of the network is shown below.

Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
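
As a quick check on that 16% figure:

# Compression ratio: units in the encoded layer vs. pixels in the original image
encoded_size = 4 * 4 * 8   # 128 units in the final encoder layer
input_size = 28 * 28       # 784 pixels per MNIST image
print(encoded_size / input_size)   # ~0.163, i.e. roughly 16%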

What's going on with the decoder

Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution", which is what you'll find in the TensorFlow API as tf.nn.conv2d_transpose.
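
For reference only, a transpose convolution that doubles the spatial size could look something like the sketch below. This assumes the TF 1.x layers API; the placeholder and names here are just for illustration, and this layer is not part of the network we build in this notebook.

# Hypothetical sketch: upsample a 4x4x8 tensor to 8x8x8 with a transpose convolution.
# Shown only to contrast with the resize-then-convolve approach we actually use.
example = tf.placeholder(tf.float32, (None, 4, 4, 8))
deconv = tf.layers.conv2d_transpose(example, filters=8, kernel_size=(3,3),
                                    strides=(2,2), padding='same',
                                    activation=tf.nn.relu)
# deconv.get_shape() -> (?, 8, 8, 8)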

However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal. In this Distill article by Augustus Odena, et al., the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
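
In other words, each upsampling step is just a resize followed by an ordinary convolution. Here's a minimal sketch of one such step using tf.image.resize_images as mentioned above (the tensor names are just for illustration; the network below uses the equivalent tf.image.resize_nearest_neighbor shortcut):

# One upsample-then-convolve step: 7x7x8 -> 14x14x8
small = tf.placeholder(tf.float32, (None, 7, 7, 8))
resized = tf.image.resize_images(small, (14, 14),
                                 method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
upsampled = tf.layers.conv2d(resized, 8, (3,3), padding='same', activation=tf.nn.relu)
# upsampled.get_shape() -> (?, 14, 14, 8)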

Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the output of that layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena et al. claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images, or use tf.image.resize_nearest_neighbor.


In [4]:
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')

### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8

### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16

logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
# Now 28x28x1

decoded = tf.nn.sigmoid(logits, name='decoded')

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
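
Before training, it's worth checking that the tensor shapes match the comments above; a quick, optional sanity check could be:

# Sanity check: these should agree with the shape comments in the cell above
print(encoded.get_shape())   # (?, 4, 4, 8)
print(decoded.get_shape())   # (?, 28, 28, 1)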

Training

As before, we'll train the network here. Instead of flattening the images, though, we can pass them in as 28x28x1 arrays.


In [5]:
sess = tf.Session()

In [ ]:
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        imgs = batch[0].reshape((-1, 28, 28, 1))
        batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
                                                         targets_: imgs})

        print("Epoch: {}/{}...".format(e+1, epochs),
              "Training loss: {:.4f}".format(batch_cost))


Epoch: 1/20... Training loss: 0.6917
Epoch: 1/20... Training loss: 0.6890
Epoch: 1/20... Training loss: 0.6855
Epoch: 1/20... Training loss: 0.6816
Epoch: 1/20... Training loss: 0.6766
Epoch: 1/20... Training loss: 0.6699
Epoch: 1/20... Training loss: 0.6660
Epoch: 1/20... Training loss: 0.6532
...
Epoch: 2/20... Training loss: 0.1434

In [13]:
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})

for images, row in zip([in_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)


fig.tight_layout(pad=0.1)



In [19]:
sess.close()

Denoising

As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use the noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
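
The noise itself is easy to generate with NumPy. A minimal sketch of the pattern (the same code appears in the training loop below):

# Add Gaussian noise scaled by a noise factor, then clip back into the valid [0, 1] pixel range
imgs = mnist.train.images[:10].reshape((-1, 28, 28, 1))
noise_factor = 0.5
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)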

Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.

Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.


In [21]:
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')

### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16

### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32

logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
# Now 28x28x1

decoded = tf.nn.sigmoid(logits, name='decoded')

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)

In [22]:
sess = tf.Session()

In [ ]:
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        # Get images from the batch
        imgs = batch[0].reshape((-1, 28, 28, 1))
        
        # Add random noise to the input images
        noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
        # Clip the images to be between 0 and 1
        noisy_imgs = np.clip(noisy_imgs, 0., 1.)
        
        # Noisy images as inputs, original images as targets
        batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
                                                         targets_: imgs})

        print("Epoch: {}/{}...".format(e+1, epochs),
              "Training loss: {:.4f}".format(batch_cost))

Checking out the performance

Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.


In [29]:
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)

reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})

for images, row in zip([noisy_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

fig.tight_layout(pad=0.1)