A Simple Autoencoder

We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.

In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.


In [1]:
%matplotlib inline

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

In [2]:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)


Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.


In [3]:
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')


Out[3]:
<matplotlib.image.AxesImage at 0x116d5eb38>

We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
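
To make the shapes concrete before we build the graph, here's a minimal NumPy sketch of the forward pass described above, using hypothetical, untrained weights and the 32-unit code size used below, purely to illustrate how the dimensions flow:

import numpy as np

# Hypothetical, untrained weights -- just to show how the shapes flow
x = np.random.rand(1, 784)                    # one flattened 28x28 image
W_enc = np.random.randn(784, 32) * 0.01       # encoder weights
W_dec = np.random.randn(32, 784) * 0.01       # decoder weights

code = np.maximum(0, x @ W_enc)               # ReLU hidden layer: the compressed representation
logits = code @ W_dec                         # output layer logits
reconstruction = 1 / (1 + np.exp(-logits))    # sigmoid output, same shape as the input

print(code.shape, reconstruction.shape)       # prints (1, 32) (1, 784)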

Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. There should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss; there is a convenient TensorFlow function for this, tf.nn.sigmoid_cross_entropy_with_logits (see the documentation). Note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, so to get the reconstructed images you'll need to pass the logits through the sigmoid function.
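
For reference, for a logit $x$ and target $t$, tf.nn.sigmoid_cross_entropy_with_logits computes the elementwise sigmoid cross-entropy

$$-t \log \sigma(x) - (1 - t)\log\bigl(1 - \sigma(x)\bigr) = \max(x, 0) - xt + \log\bigl(1 + e^{-|x|}\bigr),$$

where the right-hand side is the numerically stable form the function actually evaluates.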


In [10]:
# Size of the encoding layer (the hidden layer)
encoding_dim = 32  # feel free to change this value

# Input and target placeholders (flattened 28x28 images)
inputs_ = tf.placeholder(tf.float32, shape=(None, 784))
targets_ = tf.placeholder(tf.float32, shape=(None, 784))

# Output of the hidden layer (the compressed representation)
encoded = tf.contrib.layers.fully_connected(inputs_, encoding_dim)
# Output layer logits
logits = tf.contrib.layers.fully_connected(encoded, 784, activation_fn=None)
# Sigmoid output from the logits (the reconstructed images)
decoded = tf.sigmoid(logits)

# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Mean of the loss
cost = tf.reduce_mean(loss)

# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)

Training


In [12]:
# Create the session
sess = tf.Session()

Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.

Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).


In [14]:
epochs = 2
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        feed = {inputs_: batch[0], targets_: batch[0]}
        batch_cost, _ = sess.run([cost, opt], feed_dict=feed)

        print("Epoch: {}/{}...".format(e+1, epochs),
              "Training loss: {:.4f}".format(batch_cost))


Epoch: 1/2... Training loss: 0.6944
Epoch: 1/2... Training loss: 0.6911
Epoch: 1/2... Training loss: 0.6876
...
Epoch: 1/2... Training loss: 0.1761
Epoch: 1/2... Training loss: 0.1756
Epoch: 2/2... Training loss: 0.1817
Epoch: 2/2... Training loss: 0.1822
...
Epoch: 2/2... Training loss: 0.1435
Epoch: 2/2... Training loss: 0.1457
Epoch: 2/2... Training loss: 0.1471

Checking out the results

Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good, apart from some blurriness in places.


In [15]:
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})

for images, row in zip([in_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

fig.tight_layout(pad=0.1)



In [29]:
# Visualize the 32-dimensional compressed representations for the same test images,
# scaled to [0, 1] so they show up nicely in grayscale
comp1 = compressed / np.max(compressed)
fig1, axes1 = plt.subplots(nrows=1, ncols=10, sharex=True, sharey=True, figsize=(100,4))

for i, ax in enumerate(axes1):
    ax.imshow(comp1[i].reshape(1, 32), cmap='Greys_r')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
fig1.tight_layout(pad=0.1)
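
Each test image is being compressed from 784 pixel values down to the 32 values visualized above, roughly a 24x reduction in dimensionality.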



In [ ]:
sess.close()

Up Next

We're dealing with images here, so we can (usually) get better performance using convolutional layers. Next, we'll build a better autoencoder with convolutional layers.

In practice, autoencoders aren't actually better at compression than typical lossy methods like JPEG and MP3. But they are being used for noise reduction, and you'll build a denoising autoencoder as well.