06 Variational Autoencoder

Introduction

The Variational Autoencoder (VAE) is one kind of generative model; another common generative model is the Generative Adversarial Network (GAN)

In this section we go over how the VAE works and implement one with Keras

Theory

We often face this kind of task: given many samples, learn to generate new ones

Take MNIST as an example. After seeing a few thousand images of handwritten digits, we can imitate them and produce similar images that do not exist in the original data; they differ a little but still look alike

In other words, we need to learn the distribution $P(x)$ of the data $x$. Once we know the distribution, generating new samples is easy

But estimating a data distribution is not easy, especially when the amount of data is limited

We can introduce a latent variable $z$ and obtain $x$ from $z$ through a complex mapping, assuming that $z$ follows a Gaussian distribution:

$$x=f(z;\theta)$$

Then we only need to learn the parameters of the Gaussian distribution that the latent variable follows, together with the mapping function, in order to recover the distribution of the original data
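
Written out, the data distribution is obtained by marginalizing the decoder over the latent variable, where $P(z)$ is the assumed Gaussian prior:

$$P(x)=\int P(x\mid z;\theta)\,P(z)\,dz$$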

To learn the parameters of that Gaussian distribution, we need enough samples of $z$

However, samples of $z$ cannot be observed directly, so we also need a mapping function (a conditional probability distribution) that produces the corresponding $z$ samples from the available $x$ samples:

$$z=Q(x)$$
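
In the VAE this mapping is taken to be a Gaussian whose mean and variance are produced by the encoder network:

$$Q(z\mid x)=\mathcal{N}\big(z;\,\mu(x),\,\sigma^2(x)\big)$$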

This looks a lot like an autoencoder: start from the data itself, encode it into a latent representation, and decode it to reconstruct the input

The differences between a VAE and an AE are as follows:

- In an AE, the distribution of the latent representation is unknown; in a VAE, the latent variable is assumed to follow a Gaussian distribution
- An AE learns only the encoder and the decoder; a VAE additionally learns the distribution of the latent variable, i.e. the mean and variance of the Gaussian
- An AE can only reconstruct the $x$ it is given; a VAE can sample new values of $z$ and thus produce new $x$, i.e. generate new samples

Loss function

Besides the reconstruction error, since the VAE assumes the latent variable $z$ follows a Gaussian distribution, the conditional distribution produced by the encoder should be as close to that Gaussian as possible

We can use relative entropy, also known as the KL divergence (Kullback–Leibler divergence), to measure the difference, or distance, between two distributions; note that relative entropy is not symmetric:

$$ D(f\parallel g)=\int f(x)\log\frac{f(x)}{g(x)}dx $$
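
For the case used below, where the encoder outputs a diagonal Gaussian $\mathcal{N}(\mu,\sigma^2)$ and the target is the standard normal $\mathcal{N}(0,1)$, the KL divergence has a simple closed form (this is exactly the kl_loss term in the code later on):

$$D\big(\mathcal{N}(\mu,\sigma^2)\parallel\mathcal{N}(0,1)\big)=-\frac{1}{2}\sum_{j}\left(1+\log\sigma_j^2-\mu_j^2-\sigma_j^2\right)$$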

Implementation

Taking MNIST as the example, we learn the two parameters, the mean and the variance, of the Gaussian distribution that the latent variable $z$ follows, so that new values of $z$ can be used to generate $x$ that do not appear in the original data

The encoder and the decoder each use two fully connected layers. This keeps things simple, since the goal is mainly to show how a VAE is implemented


In [1]:
# Load libraries


import numpy as np
import matplotlib.pyplot as plt

from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
from keras import objectives
from keras.datasets import fashion_mnist  # you can also run this on MNIST, whose size and format are exactly the same as Fashion-MNIST

# Define some constants

batch_size = 200
original_dim = 784
intermediate_dim = 256
latent_dim = 2
epochs = 30


Using TensorFlow backend.

In [2]:
# Encoder: two fully connected layers; the latent representation consists of a mean and a (log) variance

x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

# The Lambda layer has no trainable weights; it only performs the sampling computation used later to produce new z

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.)
    return z_mean + K.exp(z_log_var / 2) * epsilon

z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# Decoder: two fully connected layers; x_decoded_mean is the reconstructed output

decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)
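
The sampling function above implements the reparameterization trick: instead of sampling $z$ directly, we draw $\epsilon\sim\mathcal{N}(0,1)$ and compute

$$z=\mu+\sigma\cdot\epsilon=\mu+\exp\!\left(\tfrac{1}{2}\log\sigma^2\right)\cdot\epsilon$$

which keeps the model differentiable with respect to z_mean and z_log_var; this is why the code uses K.exp(z_log_var / 2).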

Training

Define the overall loss function and compile the model, then load the data and train; training on a CPU is bearably fast
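
The total loss implemented below is the per-sample reconstruction cross-entropy plus the KL term derived above, with the mean $\mu$ and log-variance $\log\sigma^2$ taken from the encoder outputs z_mean and z_log_var (these correspond to xent_loss and kl_loss in the cell below):

$$\mathcal{L}(x)=-\sum_{i=1}^{784}\big[x_i\log\hat{x}_i+(1-x_i)\log(1-\hat{x}_i)\big]-\frac{1}{2}\sum_{j=1}^{2}\big(1+\log\sigma_j^2-\mu_j^2-\sigma_j^2\big)$$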


In [3]:
# Define the overall loss function and compile the model

def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

vae = Model(x, x_decoded_mean)
vae.compile(optimizer='rmsprop', loss=vae_loss)

# Load the data and train; CPU training speed is bearable

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

vae.fit(x_train, x_train,
        shuffle=True,
        epochs=epochs,
        batch_size=batch_size,
        validation_data=(x_test, x_test))


Downloading data from http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 11us/step
Downloading data from http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 190s 7us/step
Downloading data from http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 1us/step
Downloading data from http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 23s 5us/step
Train on 60000 samples, validate on 10000 samples
Epoch 1/30
60000/60000 [==============================] - 8s 140us/step - loss: 327.1910 - val_loss: 296.7529
Epoch 2/30
60000/60000 [==============================] - 7s 120us/step - loss: 288.6379 - val_loss: 285.2014
Epoch 3/30
60000/60000 [==============================] - 7s 109us/step - loss: 279.1963 - val_loss: 276.7499
Epoch 4/30
60000/60000 [==============================] - 7s 117us/step - loss: 275.5596 - val_loss: 273.9428
Epoch 5/30
60000/60000 [==============================] - 7s 118us/step - loss: 273.3763 - val_loss: 275.2061
Epoch 6/30
60000/60000 [==============================] - 8s 139us/step - loss: 271.8620 - val_loss: 272.5867
Epoch 7/30
60000/60000 [==============================] - 9s 145us/step - loss: 270.7743 - val_loss: 272.3646
Epoch 8/30
60000/60000 [==============================] - 8s 129us/step - loss: 269.8886 - val_loss: 269.6622
Epoch 9/30
60000/60000 [==============================] - 8s 139us/step - loss: 269.2010 - val_loss: 269.8309
Epoch 10/30
60000/60000 [==============================] - 8s 131us/step - loss: 268.6267 - val_loss: 269.7462
Epoch 11/30
60000/60000 [==============================] - 7s 112us/step - loss: 268.1516 - val_loss: 268.6689
Epoch 12/30
60000/60000 [==============================] - 6s 108us/step - loss: 267.7092 - val_loss: 270.7415
Epoch 13/30
60000/60000 [==============================] - 6s 105us/step - loss: 267.3236 - val_loss: 268.2933
Epoch 14/30
60000/60000 [==============================] - 6s 102us/step - loss: 266.9723 - val_loss: 268.4266
Epoch 15/30
60000/60000 [==============================] - 6s 100us/step - loss: 266.6666 - val_loss: 269.5921
Epoch 16/30
60000/60000 [==============================] - 6s 100us/step - loss: 266.3498 - val_loss: 267.1988
Epoch 17/30
60000/60000 [==============================] - 6s 102us/step - loss: 266.0892 - val_loss: 266.7300
Epoch 18/30
60000/60000 [==============================] - 8s 125us/step - loss: 265.8761 - val_loss: 266.8896
Epoch 19/30
60000/60000 [==============================] - 7s 119us/step - loss: 265.6424 - val_loss: 267.7955
Epoch 20/30
60000/60000 [==============================] - 7s 112us/step - loss: 265.4490 - val_loss: 266.3908
Epoch 21/30
60000/60000 [==============================] - 6s 99us/step - loss: 265.2790 - val_loss: 267.1181
Epoch 22/30
60000/60000 [==============================] - 6s 98us/step - loss: 265.0728 - val_loss: 267.7615
Epoch 23/30
60000/60000 [==============================] - 6s 107us/step - loss: 264.9087 - val_loss: 267.0530
Epoch 24/30
60000/60000 [==============================] - 6s 98us/step - loss: 264.7850 - val_loss: 267.0665
Epoch 25/30
60000/60000 [==============================] - 6s 98us/step - loss: 264.6306 - val_loss: 265.8986
Epoch 26/30
60000/60000 [==============================] - 6s 98us/step - loss: 264.4828 - val_loss: 266.5569
Epoch 27/30
60000/60000 [==============================] - 6s 95us/step - loss: 264.3544 - val_loss: 265.2389
Epoch 28/30
60000/60000 [==============================] - 6s 97us/step - loss: 264.1958 - val_loss: 265.3972
Epoch 29/30
60000/60000 [==============================] - 6s 98us/step - loss: 264.0662 - val_loss: 265.1075
Epoch 30/30
60000/60000 [==============================] - 6s 96us/step - loss: 263.9377 - val_loss: 266.0345
Out[3]:
<keras.callbacks.History at 0x12cc81f28>

In [4]:
# Build a separate encoder model to see what the data looks like in the 2-D latent space

encoder = Model(x, z_mean)

x_test_encoded = encoder.predict(x_test, batch_size=batch_size)
plt.figure(figsize=(6, 6))
plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test)
plt.colorbar()
plt.show()

# The plot below shows that the different classes are nicely separated in the 2-D latent space



In [5]:
# Build a generator that maps the latent space to the output, used to produce new samples

decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h(decoder_input)
_x_decoded_mean = decoder_mean(_h_decoded)
generator = Model(decoder_input, _x_decoded_mean)

# Generate a 2-D grid of points, feed them to the generator as new values of z, and display the generated x

n = 20
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
grid_x = np.linspace(-4, 4, n)
grid_y = np.linspace(-4, 4, n)

for i, xi in enumerate(grid_x):
    for j, yi in enumerate(grid_y):
        z_sample = np.array([[yi, xi]])
        x_decoded = generator.predict(z_sample)
        digit = x_decoded[0].reshape(digit_size, digit_size)
        figure[(n - i - 1) * digit_size: (n - i) * digit_size,
               j * digit_size: (j + 1) * digit_size] = digit

plt.figure(figsize=(10, 10))
plt.imshow(figure)
plt.show()

# The result below is consistent with the latent-space plot above; we can even see transitional states between neighboring classes


