Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first project with GANs, we want you to test your neural network on MNIST before CelebA. Running the GAN on MNIST will let you see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".


In [46]:
data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)


Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first few examples by changing show_n_images.


In [47]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')


Out[47]:
<matplotlib.image.AxesImage at 0x1e893dad4a8>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first few examples by changing show_n_images.


In [48]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))


Out[48]:
<matplotlib.image.AxesImage at 0x1e886a7ef60>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. The values of the MNIST and CelebA datasets will be in the range of -0.5 to 0.5 for 28x28 images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.

The MNIST images are black-and-white images with a single color channel, while the CelebA images have 3 color channels (RGB).
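
Note that the generator built below ends in a tanh, whose outputs lie in [-1, 1], so these [-0.5, 0.5] batches need a rescale before they are fed to the discriminator; the train function later does exactly this with batch_images *= 2. A minimal sketch of the idea (the fake batch here is illustrative only, not from the helper module):

import numpy as np

def rescale_batch(batch):
    """Map images from the helper's [-0.5, 0.5] range to tanh's [-1, 1] range."""
    return batch * 2.0

# Illustrative check on a fake batch of four 28x28 grayscale images
fake_batch = np.random.uniform(-0.5, 0.5, size=(4, 28, 28, 1))
scaled = rescale_batch(fake_batch)
assert -1.0 <= scaled.min() and scaled.max() <= 1.0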

Build the Neural Network

You'll build the components necessary to build a GAN by implementing the following functions below:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check that you have the correct version of TensorFlow and access to a GPU.


In [30]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).


In [31]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # TODO: Implement Function
    real_input = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    z_input = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    

    return real_input, z_input, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)


Tests Passed

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).


In [80]:
def discriminator(images, reuse=False, alpha=0.2):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # TODO: Implement Function
    with tf.variable_scope('discriminator', reuse=reuse):
        # First convolution: 28x28xC -> 14x14x64
        out1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out1 = tf.maximum(out1 * alpha, out1)  # leaky ReLU
        
        # Second convolution: 14x14x64 -> 7x7x128
        out2 = tf.layers.conv2d(out1, 128, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out2 = tf.layers.batch_normalization(out2, training=True)
        out2 = tf.maximum(out2 * alpha, out2)
        
        # Third convolution: 7x7x128 -> 4x4x256
        out3 = tf.layers.conv2d(out2, 256, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out3 = tf.layers.batch_normalization(out3, training=True)
        out3 = tf.maximum(out3 * alpha, out3)
        
        flat = tf.reshape(out3, (-1, 4*4*256))
        
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)

        return out, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)


Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.


In [92]:
def generator(z, out_channel_dim, is_train=True, alpha=0.2):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # TODO: Implement Function
    with tf.variable_scope('generator', reuse=not is_train):
        # Project z and reshape to 7x7x256
        out1 = tf.layers.dense(z, 7*7*256)
        out1 = tf.reshape(out1, (-1, 7, 7, 256))
        out1 = tf.layers.batch_normalization(out1, training=is_train)
        out1 = tf.maximum(out1 * alpha, out1)  # leaky ReLU
        
        # Transposed convolution: 7x7x256 -> 14x14x128
        out2 = tf.layers.conv2d_transpose(out1, 128, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out2 = tf.layers.batch_normalization(out2, training=is_train)
        out2 = tf.maximum(out2 * alpha, out2)
        
        # Transposed convolution: 14x14x128 -> 28x28x64
        out3 = tf.layers.conv2d_transpose(out2, 64, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out3 = tf.layers.batch_normalization(out3, training=is_train)
        out3 = tf.maximum(out3 * alpha, out3)
        
        # Output layer: 28x28xout_channel_dim
        logits = tf.layers.conv2d_transpose(out3, out_channel_dim, 5, strides=1, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        
        out = tf.tanh(logits)
        
        return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)


Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)

In [82]:
def model_loss(input_real, input_z, out_channel_dim, alpha=0.2, smooth=0.1):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # TODO: Implement Function
    g_model = generator(input_z, out_channel_dim, alpha=alpha)
    d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
    d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
    
    # One-sided label smoothing: real labels are (1 - smooth) instead of 1.0
    d_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real) * (1 - smooth)))
    d_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
    # Non-saturating generator loss: the generator wants fakes classified as real
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))

    d_loss = d_loss_real + d_loss_fake

    return d_loss, g_loss


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)


Tests Passed

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables, and filter them by the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).


In [83]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # TODO: Implement Function
    tf_vars = tf.trainable_variables()
    dis_vars = [var for var in tf_vars if var.name.startswith('discriminator')]
    gen_vars = [var for var in tf_vars if var.name.startswith('generator')]
    
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        dis_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=dis_vars)
        gen_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=gen_vars)
    
    return dis_train_opt, gen_train_opt


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)


Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.


In [84]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GANs. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show the generator's output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.


In [87]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # TODO: Build Model
    input_real, input_z, lr = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
    d_train_opt, g_train_opt = model_opt(d_loss, g_loss, lr, beta1)
    
    step = 0
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            print("Current Epoch {}...".format(epoch_i + 1))
            for batch_images in get_batches(batch_size):
                step += 1
                # TODO: Train Model
                batch_z = np.random.uniform(-1, 1, size=[batch_size, z_dim])
                
                # Rescale images from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
                batch_images *= 2
                
                # Run the optimizers; the generator gets two updates per discriminator update
                _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})
                _ = sess.run(g_train_opt, feed_dict={input_z: batch_z, input_real: batch_images, lr: learning_rate})
                _ = sess.run(g_train_opt, feed_dict={input_z: batch_z, input_real: batch_images, lr: learning_rate})
                
                if step % 5 == 0:
                    train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
                    train_loss_g = g_loss.eval({input_z: batch_z})
                    print("Current Epoch {}/{} \n".format(epoch_i + 1, epoch_count),
                          "Discriminator Loss is : {:.4f} \n".format(train_loss_d),
                          "Generator Loss is : {:.4f}".format(train_loss_g))
                    if step % 25 == 0:
                        show_generator_output(sess, 25, input_z, data_shape[3], data_image_mode)
                    print("\n\n")

MNIST

Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.


In [90]:
batch_size = 128
z_dim = 256
learning_rate = 0.0001
beta1 = 0.5


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)


Current Epoch 1...
Current Epoch 1/2 
 Discriminator Loss is : 0.8838 
 Generator Loss is : 1.1515



Current Epoch 1/2 
 Discriminator Loss is : 0.5947 
 Generator Loss is : 1.7733



Current Epoch 1/2 
 Discriminator Loss is : 0.6234 
 Generator Loss is : 1.7531



Current Epoch 1/2 
 Discriminator Loss is : 0.5632 
 Generator Loss is : 2.0472



Current Epoch 1/2 
 Discriminator Loss is : 0.6089 
 Generator Loss is : 1.8528


Current Epoch 1/2 
 Discriminator Loss is : 0.6890 
 Generator Loss is : 1.7135



Current Epoch 1/2 
 Discriminator Loss is : 0.7984 
 Generator Loss is : 1.6617



Current Epoch 1/2 
 Discriminator Loss is : 0.7715 
 Generator Loss is : 1.5823



Current Epoch 1/2 
 Discriminator Loss is : 0.8004 
 Generator Loss is : 1.4205



Current Epoch 1/2 
 Discriminator Loss is : 0.7876 
 Generator Loss is : 1.4180


Current Epoch 1/2 
 Discriminator Loss is : 0.6305 
 Generator Loss is : 1.9816



Current Epoch 1/2 
 Discriminator Loss is : 0.8217 
 Generator Loss is : 1.3229



Current Epoch 1/2 
 Discriminator Loss is : 0.5790 
 Generator Loss is : 2.2231



Current Epoch 1/2 
 Discriminator Loss is : 0.5701 
 Generator Loss is : 2.2108



Current Epoch 1/2 
 Discriminator Loss is : 0.5825 
 Generator Loss is : 1.9723


Current Epoch 1/2 
 Discriminator Loss is : 0.5268 
 Generator Loss is : 2.4254



Current Epoch 1/2 
 Discriminator Loss is : 0.5198 
 Generator Loss is : 2.4020



Current Epoch 1/2 
 Discriminator Loss is : 0.5911 
 Generator Loss is : 2.3119



Current Epoch 1/2 
 Discriminator Loss is : 0.5507 
 Generator Loss is : 2.0705



Current Epoch 1/2 
 Discriminator Loss is : 0.5566 
 Generator Loss is : 2.3581


Current Epoch 1/2 
 Discriminator Loss is : 0.6049 
 Generator Loss is : 1.8410



Current Epoch 1/2 
 Discriminator Loss is : 0.6522 
 Generator Loss is : 2.1543



Current Epoch 1/2 
 Discriminator Loss is : 0.5808 
 Generator Loss is : 2.3821



Current Epoch 1/2 
 Discriminator Loss is : 0.6190 
 Generator Loss is : 2.4912



Current Epoch 1/2 
 Discriminator Loss is : 0.6010 
 Generator Loss is : 2.4119


Current Epoch 1/2 
 Discriminator Loss is : 0.6558 
 Generator Loss is : 1.7942



Current Epoch 1/2 
 Discriminator Loss is : 0.7104 
 Generator Loss is : 1.4166



Current Epoch 1/2 
 Discriminator Loss is : 0.7218 
 Generator Loss is : 1.3981



Current Epoch 1/2 
 Discriminator Loss is : 0.6018 
 Generator Loss is : 2.0667



Current Epoch 1/2 
 Discriminator Loss is : 0.5831 
 Generator Loss is : 2.1990


Current Epoch 1/2 
 Discriminator Loss is : 0.6090 
 Generator Loss is : 1.9189



Current Epoch 1/2 
 Discriminator Loss is : 0.6711 
 Generator Loss is : 1.7178



Current Epoch 1/2 
 Discriminator Loss is : 0.5689 
 Generator Loss is : 2.0694



Current Epoch 1/2 
 Discriminator Loss is : 0.5498 
 Generator Loss is : 2.2915



Current Epoch 1/2 
 Discriminator Loss is : 0.5913 
 Generator Loss is : 2.0090


Current Epoch 1/2 
 Discriminator Loss is : 0.6274 
 Generator Loss is : 1.6421



Current Epoch 1/2 
 Discriminator Loss is : 0.5851 
 Generator Loss is : 2.0729



Current Epoch 1/2 
 Discriminator Loss is : 0.5490 
 Generator Loss is : 2.7646



Current Epoch 1/2 
 Discriminator Loss is : 0.7308 
 Generator Loss is : 2.7695



Current Epoch 1/2 
 Discriminator Loss is : 0.5416 
 Generator Loss is : 1.9976


Current Epoch 1/2 
 Discriminator Loss is : 0.5598 
 Generator Loss is : 1.9342



Current Epoch 1/2 
 Discriminator Loss is : 0.5433 
 Generator Loss is : 2.0003



Current Epoch 1/2 
 Discriminator Loss is : 0.8141 
 Generator Loss is : 1.2325



Current Epoch 1/2 
 Discriminator Loss is : 0.5396 
 Generator Loss is : 2.0332



Current Epoch 1/2 
 Discriminator Loss is : 0.5306 
 Generator Loss is : 2.4981


Current Epoch 1/2 
 Discriminator Loss is : 0.5376 
 Generator Loss is : 2.0273



Current Epoch 1/2 
 Discriminator Loss is : 0.5217 
 Generator Loss is : 2.3296



Current Epoch 1/2 
 Discriminator Loss is : 0.5168 
 Generator Loss is : 3.0018



Current Epoch 1/2 
 Discriminator Loss is : 0.6879 
 Generator Loss is : 1.3958



Current Epoch 1/2 
 Discriminator Loss is : 0.5218 
 Generator Loss is : 2.8060


Current Epoch 1/2 
 Discriminator Loss is : 0.5842 
 Generator Loss is : 1.9054



Current Epoch 1/2 
 Discriminator Loss is : 0.5619 
 Generator Loss is : 2.4161



Current Epoch 1/2 
 Discriminator Loss is : 0.7812 
 Generator Loss is : 1.1171



Current Epoch 1/2 
 Discriminator Loss is : 0.5153 
 Generator Loss is : 2.3442



Current Epoch 1/2 
 Discriminator Loss is : 0.5493 
 Generator Loss is : 2.0686


Current Epoch 1/2 
 Discriminator Loss is : 0.5985 
 Generator Loss is : 2.1853



Current Epoch 1/2 
 Discriminator Loss is : 0.5834 
 Generator Loss is : 2.3623



Current Epoch 1/2 
 Discriminator Loss is : 0.5463 
 Generator Loss is : 2.2503



Current Epoch 1/2 
 Discriminator Loss is : 0.5660 
 Generator Loss is : 2.2172



Current Epoch 1/2 
 Discriminator Loss is : 0.5798 
 Generator Loss is : 2.5697


Current Epoch 1/2 
 Discriminator Loss is : 0.9801 
 Generator Loss is : 0.8320



Current Epoch 1/2 
 Discriminator Loss is : 0.6914 
 Generator Loss is : 1.6559



Current Epoch 1/2 
 Discriminator Loss is : 0.8742 
 Generator Loss is : 1.0245



Current Epoch 1/2 
 Discriminator Loss is : 0.6341 
 Generator Loss is : 1.8579



Current Epoch 1/2 
 Discriminator Loss is : 1.0390 
 Generator Loss is : 2.8223


Current Epoch 1/2 
 Discriminator Loss is : 0.7931 
 Generator Loss is : 1.6768



Current Epoch 1/2 
 Discriminator Loss is : 0.7792 
 Generator Loss is : 1.5181



Current Epoch 1/2 
 Discriminator Loss is : 0.8110 
 Generator Loss is : 1.7590



Current Epoch 1/2 
 Discriminator Loss is : 0.6901 
 Generator Loss is : 1.9109



Current Epoch 1/2 
 Discriminator Loss is : 0.7123 
 Generator Loss is : 1.9024


Current Epoch 1/2 
 Discriminator Loss is : 0.8992 
 Generator Loss is : 2.0859



Current Epoch 1/2 
 Discriminator Loss is : 0.6574 
 Generator Loss is : 1.8700



Current Epoch 1/2 
 Discriminator Loss is : 0.7772 
 Generator Loss is : 1.4782



Current Epoch 1/2 
 Discriminator Loss is : 0.6572 
 Generator Loss is : 2.0683



Current Epoch 1/2 
 Discriminator Loss is : 0.9810 
 Generator Loss is : 0.8907


Current Epoch 1/2 
 Discriminator Loss is : 0.7696 
 Generator Loss is : 1.3603



Current Epoch 1/2 
 Discriminator Loss is : 0.8370 
 Generator Loss is : 1.2893



Current Epoch 1/2 
 Discriminator Loss is : 0.7659 
 Generator Loss is : 1.3481



Current Epoch 1/2 
 Discriminator Loss is : 0.7852 
 Generator Loss is : 1.4321



Current Epoch 1/2 
 Discriminator Loss is : 0.6995 
 Generator Loss is : 1.9119


Current Epoch 1/2 
 Discriminator Loss is : 0.6583 
 Generator Loss is : 1.7934



Current Epoch 1/2 
 Discriminator Loss is : 0.9553 
 Generator Loss is : 2.5923



Current Epoch 1/2 
 Discriminator Loss is : 0.7112 
 Generator Loss is : 1.8507



Current Epoch 1/2 
 Discriminator Loss is : 1.3820 
 Generator Loss is : 0.5811



Current Epoch 1/2 
 Discriminator Loss is : 0.6222 
 Generator Loss is : 1.9444


Current Epoch 1/2 
 Discriminator Loss is : 0.9512 
 Generator Loss is : 2.5168



Current Epoch 1/2 
 Discriminator Loss is : 0.9208 
 Generator Loss is : 1.2926



Current Epoch 1/2 
 Discriminator Loss is : 0.6967 
 Generator Loss is : 1.4988



Current Epoch 1/2 
 Discriminator Loss is : 0.8876 
 Generator Loss is : 1.0481



Current Epoch 1/2 
 Discriminator Loss is : 0.7767 
 Generator Loss is : 2.1671


Current Epoch 1/2 
 Discriminator Loss is : 0.7763 
 Generator Loss is : 1.8164



Current Epoch 1/2 
 Discriminator Loss is : 0.6611 
 Generator Loss is : 2.2123



Current Epoch 1/2 
 Discriminator Loss is : 0.7671 
 Generator Loss is : 1.6849



Current Epoch 2...
Current Epoch 2/2 
 Discriminator Loss is : 0.8242 
 Generator Loss is : 1.7637



Current Epoch 2/2 
 Discriminator Loss is : 0.8620 
 Generator Loss is : 1.7471


Current Epoch 2/2 
 Discriminator Loss is : 0.6580 
 Generator Loss is : 2.1451



Current Epoch 2/2 
 Discriminator Loss is : 0.9286 
 Generator Loss is : 1.0091



Current Epoch 2/2 
 Discriminator Loss is : 0.7694 
 Generator Loss is : 1.8731



Current Epoch 2/2 
 Discriminator Loss is : 0.9791 
 Generator Loss is : 1.9980



Current Epoch 2/2 
 Discriminator Loss is : 0.7608 
 Generator Loss is : 1.8083


Current Epoch 2/2 
 Discriminator Loss is : 0.9740 
 Generator Loss is : 2.3271



Current Epoch 2/2 
 Discriminator Loss is : 0.7167 
 Generator Loss is : 1.6943



Current Epoch 2/2 
 Discriminator Loss is : 0.7058 
 Generator Loss is : 1.6402



Current Epoch 2/2 
 Discriminator Loss is : 0.8459 
 Generator Loss is : 1.2381



Current Epoch 2/2 
 Discriminator Loss is : 0.6523 
 Generator Loss is : 1.9193


Current Epoch 2/2 
 Discriminator Loss is : 0.6773 
 Generator Loss is : 1.8750



Current Epoch 2/2 
 Discriminator Loss is : 0.9499 
 Generator Loss is : 2.3204



Current Epoch 2/2 
 Discriminator Loss is : 0.6799 
 Generator Loss is : 1.8597



Current Epoch 2/2 
 Discriminator Loss is : 0.7057 
 Generator Loss is : 1.8527



Current Epoch 2/2 
 Discriminator Loss is : 1.0397 
 Generator Loss is : 2.7849


Current Epoch 2/2 
 Discriminator Loss is : 0.6505 
 Generator Loss is : 2.1017



Current Epoch 2/2 
 Discriminator Loss is : 0.8510 
 Generator Loss is : 1.1938



Current Epoch 2/2 
 Discriminator Loss is : 0.6586 
 Generator Loss is : 2.2802



Current Epoch 2/2 
 Discriminator Loss is : 0.6939 
 Generator Loss is : 1.4607



Current Epoch 2/2 
 Discriminator Loss is : 0.8132 
 Generator Loss is : 1.3107


Current Epoch 2/2 
 Discriminator Loss is : 0.7818 
 Generator Loss is : 1.9531



Current Epoch 2/2 
 Discriminator Loss is : 0.6900 
 Generator Loss is : 1.7846



Current Epoch 2/2 
 Discriminator Loss is : 1.0960 
 Generator Loss is : 0.8011



Current Epoch 2/2 
 Discriminator Loss is : 0.6754 
 Generator Loss is : 2.0493



Current Epoch 2/2 
 Discriminator Loss is : 0.6140 
 Generator Loss is : 2.1469


Current Epoch 2/2 
 Discriminator Loss is : 0.6397 
 Generator Loss is : 1.8683



Current Epoch 2/2 
 Discriminator Loss is : 0.6367 
 Generator Loss is : 1.8908



Current Epoch 2/2 
 Discriminator Loss is : 0.8119 
 Generator Loss is : 2.0417



Current Epoch 2/2 
 Discriminator Loss is : 0.7065 
 Generator Loss is : 1.5814



Current Epoch 2/2 
 Discriminator Loss is : 0.8087 
 Generator Loss is : 1.3156


Current Epoch 2/2 
 Discriminator Loss is : 0.8688 
 Generator Loss is : 1.2332



Current Epoch 2/2 
 Discriminator Loss is : 0.8181 
 Generator Loss is : 1.3443



Current Epoch 2/2 
 Discriminator Loss is : 0.8616 
 Generator Loss is : 1.1499



Current Epoch 2/2 
 Discriminator Loss is : 0.7736 
 Generator Loss is : 1.3444



Current Epoch 2/2 
 Discriminator Loss is : 0.8423 
 Generator Loss is : 1.2114


Current Epoch 2/2 
 Discriminator Loss is : 0.6478 
 Generator Loss is : 1.8136



Current Epoch 2/2 
 Discriminator Loss is : 1.0253 
 Generator Loss is : 0.8790



Current Epoch 2/2 
 Discriminator Loss is : 0.8667 
 Generator Loss is : 1.0814



Current Epoch 2/2 
 Discriminator Loss is : 0.7655 
 Generator Loss is : 1.9819



Current Epoch 2/2 
 Discriminator Loss is : 1.1305 
 Generator Loss is : 0.8082


Current Epoch 2/2 
 Discriminator Loss is : 0.8498 
 Generator Loss is : 1.4938



Current Epoch 2/2 
 Discriminator Loss is : 0.6971 
 Generator Loss is : 2.3341



Current Epoch 2/2 
 Discriminator Loss is : 0.6789 
 Generator Loss is : 1.7451



Current Epoch 2/2 
 Discriminator Loss is : 0.7348 
 Generator Loss is : 1.7844



Current Epoch 2/2 
 Discriminator Loss is : 1.0349 
 Generator Loss is : 1.6598


Current Epoch 2/2 
 Discriminator Loss is : 0.9652 
 Generator Loss is : 2.6253



Current Epoch 2/2 
 Discriminator Loss is : 0.7983 
 Generator Loss is : 1.5236



Current Epoch 2/2 
 Discriminator Loss is : 0.7326 
 Generator Loss is : 1.5851



Current Epoch 2/2 
 Discriminator Loss is : 0.6631 
 Generator Loss is : 2.0303



Current Epoch 2/2 
 Discriminator Loss is : 0.8494 
 Generator Loss is : 1.8338


Current Epoch 2/2 
 Discriminator Loss is : 0.8558 
 Generator Loss is : 1.5182



Current Epoch 2/2 
 Discriminator Loss is : 0.7134 
 Generator Loss is : 1.7917



Current Epoch 2/2 
 Discriminator Loss is : 0.7805 
 Generator Loss is : 1.5569



Current Epoch 2/2 
 Discriminator Loss is : 0.8607 
 Generator Loss is : 1.6576



Current Epoch 2/2 
 Discriminator Loss is : 0.7100 
 Generator Loss is : 1.4837


Current Epoch 2/2 
 Discriminator Loss is : 1.1413 
 Generator Loss is : 0.7493



Current Epoch 2/2 
 Discriminator Loss is : 0.8613 
 Generator Loss is : 1.2506



Current Epoch 2/2 
 Discriminator Loss is : 0.6633 
 Generator Loss is : 1.9855



Current Epoch 2/2 
 Discriminator Loss is : 0.7153 
 Generator Loss is : 2.1431



Current Epoch 2/2 
 Discriminator Loss is : 0.6700 
 Generator Loss is : 1.6303


Current Epoch 2/2 
 Discriminator Loss is : 0.8936 
 Generator Loss is : 2.1309



Current Epoch 2/2 
 Discriminator Loss is : 0.7858 
 Generator Loss is : 1.4272



Current Epoch 2/2 
 Discriminator Loss is : 0.8300 
 Generator Loss is : 1.4388



Current Epoch 2/2 
 Discriminator Loss is : 0.8405 
 Generator Loss is : 1.7931



Current Epoch 2/2 
 Discriminator Loss is : 0.9134 
 Generator Loss is : 2.0867


Current Epoch 2/2 
 Discriminator Loss is : 0.8276 
 Generator Loss is : 1.4481



Current Epoch 2/2 
 Discriminator Loss is : 0.8674 
 Generator Loss is : 1.3320



Current Epoch 2/2 
 Discriminator Loss is : 0.7425 
 Generator Loss is : 1.8408



Current Epoch 2/2 
 Discriminator Loss is : 0.9448 
 Generator Loss is : 2.2642



Current Epoch 2/2 
 Discriminator Loss is : 0.7854 
 Generator Loss is : 1.3534


Current Epoch 2/2 
 Discriminator Loss is : 0.7109 
 Generator Loss is : 1.7942



Current Epoch 2/2 
 Discriminator Loss is : 0.7674 
 Generator Loss is : 1.5561



Current Epoch 2/2 
 Discriminator Loss is : 0.7009 
 Generator Loss is : 1.6810



Current Epoch 2/2 
 Discriminator Loss is : 0.7044 
 Generator Loss is : 2.1727



Current Epoch 2/2 
 Discriminator Loss is : 0.9194 
 Generator Loss is : 1.2349


Current Epoch 2/2 
 Discriminator Loss is : 0.8312 
 Generator Loss is : 1.7913



Current Epoch 2/2 
 Discriminator Loss is : 0.8485 
 Generator Loss is : 1.2494



Current Epoch 2/2 
 Discriminator Loss is : 0.8539 
 Generator Loss is : 1.3669



Current Epoch 2/2 
 Discriminator Loss is : 0.9122 
 Generator Loss is : 1.7198



Current Epoch 2/2 
 Discriminator Loss is : 0.7780 
 Generator Loss is : 1.8661


Current Epoch 2/2 
 Discriminator Loss is : 0.6906 
 Generator Loss is : 1.6993



Current Epoch 2/2 
 Discriminator Loss is : 0.7639 
 Generator Loss is : 1.4482



Current Epoch 2/2 
 Discriminator Loss is : 0.7881 
 Generator Loss is : 1.7882



Current Epoch 2/2 
 Discriminator Loss is : 0.6847 
 Generator Loss is : 1.6184



Current Epoch 2/2 
 Discriminator Loss is : 1.3704 
 Generator Loss is : 0.6412


Current Epoch 2/2 
 Discriminator Loss is : 1.2197 
 Generator Loss is : 0.7400



Current Epoch 2/2 
 Discriminator Loss is : 1.0226 
 Generator Loss is : 0.8604



Current Epoch 2/2 
 Discriminator Loss is : 0.7156 
 Generator Loss is : 1.9139



Current Epoch 2/2 
 Discriminator Loss is : 1.0641 
 Generator Loss is : 2.3730



Current Epoch 2/2 
 Discriminator Loss is : 1.1468 
 Generator Loss is : 2.2227


Current Epoch 2/2 
 Discriminator Loss is : 0.7203 
 Generator Loss is : 1.6656



Current Epoch 2/2 
 Discriminator Loss is : 0.8301 
 Generator Loss is : 1.2565



CelebA

Run your GAN on CelebA. It will take around 20 minutes on an average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.


In [93]:
batch_size = 256
z_dim = 256
learning_rate = 0.0002
beta1 = 0.5


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)


Current Epoch 1...
Current Epoch 1/1 
 Discriminator Loss is : 3.2611 
 Generator Loss is : 0.0761



Current Epoch 1/1 
 Discriminator Loss is : 3.0408 
 Generator Loss is : 0.1014



Current Epoch 1/1 
 Discriminator Loss is : 2.6580 
 Generator Loss is : 0.1648



Current Epoch 1/1 
 Discriminator Loss is : 2.5553 
 Generator Loss is : 0.2017



Current Epoch 1/1 
 Discriminator Loss is : 2.0745 
 Generator Loss is : 0.3183


Current Epoch 1/1 
 Discriminator Loss is : 1.6158 
 Generator Loss is : 0.4546



Current Epoch 1/1 
 Discriminator Loss is : 1.4124 
 Generator Loss is : 0.6785



Current Epoch 1/1 
 Discriminator Loss is : 1.5337 
 Generator Loss is : 0.6031



Current Epoch 1/1 
 Discriminator Loss is : 1.6259 
 Generator Loss is : 0.4785



Current Epoch 1/1 
 Discriminator Loss is : 1.3603 
 Generator Loss is : 1.0731


Current Epoch 1/1 
 Discriminator Loss is : 1.4492 
 Generator Loss is : 1.1233



Current Epoch 1/1 
 Discriminator Loss is : 1.5605 
 Generator Loss is : 0.5743



Current Epoch 1/1 
 Discriminator Loss is : 1.5413 
 Generator Loss is : 0.6790



Current Epoch 1/1 
 Discriminator Loss is : 1.6471 
 Generator Loss is : 0.5917



Current Epoch 1/1 
 Discriminator Loss is : 1.6944 
 Generator Loss is : 0.5965


Current Epoch 1/1 
 Discriminator Loss is : 1.4983 
 Generator Loss is : 1.3387



Current Epoch 1/1 
 Discriminator Loss is : 1.7392 
 Generator Loss is : 0.5489



Current Epoch 1/1 
 Discriminator Loss is : 1.7833 
 Generator Loss is : 0.5179



Current Epoch 1/1 
 Discriminator Loss is : 1.8395 
 Generator Loss is : 0.5866



Current Epoch 1/1 
 Discriminator Loss is : 1.5833 
 Generator Loss is : 0.8213


Current Epoch 1/1 
 Discriminator Loss is : 1.7761 
 Generator Loss is : 0.5829



Current Epoch 1/1 
 Discriminator Loss is : 1.6459 
 Generator Loss is : 0.6213



Current Epoch 1/1 
 Discriminator Loss is : 1.8470 
 Generator Loss is : 0.5017



Current Epoch 1/1 
 Discriminator Loss is : 1.7422 
 Generator Loss is : 0.6163



Current Epoch 1/1 
 Discriminator Loss is : 1.7515 
 Generator Loss is : 0.5291


Current Epoch 1/1 
 Discriminator Loss is : 1.8311 
 Generator Loss is : 0.4759



Current Epoch 1/1 
 Discriminator Loss is : 1.6920 
 Generator Loss is : 0.5104



Current Epoch 1/1 
 Discriminator Loss is : 1.5924 
 Generator Loss is : 0.6677



Current Epoch 1/1 
 Discriminator Loss is : 1.6075 
 Generator Loss is : 0.7996



Current Epoch 1/1 
 Discriminator Loss is : 1.6735 
 Generator Loss is : 0.5889


Current Epoch 1/1 
 Discriminator Loss is : 1.6372 
 Generator Loss is : 0.5231



Current Epoch 1/1 
 Discriminator Loss is : 1.7058 
 Generator Loss is : 0.6674



Current Epoch 1/1 
 Discriminator Loss is : 1.5889 
 Generator Loss is : 0.6947



Current Epoch 1/1 
 Discriminator Loss is : 1.8084 
 Generator Loss is : 0.6818



Current Epoch 1/1 
 Discriminator Loss is : 1.6169 
 Generator Loss is : 0.6430


Current Epoch 1/1 
 Discriminator Loss is : 1.7143 
 Generator Loss is : 0.6185



Current Epoch 1/1 
 Discriminator Loss is : 1.6607 
 Generator Loss is : 0.6484



Current Epoch 1/1 
 Discriminator Loss is : 1.6374 
 Generator Loss is : 0.7022



Current Epoch 1/1 
 Discriminator Loss is : 1.7023 
 Generator Loss is : 0.5867



Current Epoch 1/1 
 Discriminator Loss is : 1.5349 
 Generator Loss is : 0.7247


Current Epoch 1/1 
 Discriminator Loss is : 1.4871 
 Generator Loss is : 0.6970



Current Epoch 1/1 
 Discriminator Loss is : 1.5025 
 Generator Loss is : 0.6923



Current Epoch 1/1 
 Discriminator Loss is : 1.5941 
 Generator Loss is : 0.5799



Current Epoch 1/1 
 Discriminator Loss is : 1.5558 
 Generator Loss is : 0.6116



Current Epoch 1/1 
 Discriminator Loss is : 1.5568 
 Generator Loss is : 0.6759


Current Epoch 1/1 
 Discriminator Loss is : 1.4525 
 Generator Loss is : 0.8400



Current Epoch 1/1 
 Discriminator Loss is : 1.5122 
 Generator Loss is : 0.7644



Current Epoch 1/1 
 Discriminator Loss is : 1.5825 
 Generator Loss is : 0.6452



Current Epoch 1/1 
 Discriminator Loss is : 1.4853 
 Generator Loss is : 0.7307



Current Epoch 1/1 
 Discriminator Loss is : 1.4362 
 Generator Loss is : 0.8375


Current Epoch 1/1 
 Discriminator Loss is : 1.5693 
 Generator Loss is : 0.6779



Current Epoch 1/1 
 Discriminator Loss is : 1.5107 
 Generator Loss is : 0.7877



Current Epoch 1/1 
 Discriminator Loss is : 1.4490 
 Generator Loss is : 0.7514



Current Epoch 1/1 
 Discriminator Loss is : 1.4873 
 Generator Loss is : 0.6474



Current Epoch 1/1 
 Discriminator Loss is : 1.4674 
 Generator Loss is : 0.6370


Current Epoch 1/1 
 Discriminator Loss is : 1.4558 
 Generator Loss is : 0.7601



Current Epoch 1/1 
 Discriminator Loss is : 1.4794 
 Generator Loss is : 0.6530



Current Epoch 1/1 
 Discriminator Loss is : 1.4475 
 Generator Loss is : 0.7384



Current Epoch 1/1 
 Discriminator Loss is : 1.5128 
 Generator Loss is : 0.6779



Current Epoch 1/1 
 Discriminator Loss is : 1.4486 
 Generator Loss is : 0.7799


Current Epoch 1/1 
 Discriminator Loss is : 1.4538 
 Generator Loss is : 0.6137



Current Epoch 1/1 
 Discriminator Loss is : 1.5729 
 Generator Loss is : 0.5076



Current Epoch 1/1 
 Discriminator Loss is : 1.6069 
 Generator Loss is : 0.4734



Current Epoch 1/1 
 Discriminator Loss is : 1.6049 
 Generator Loss is : 0.4704



Current Epoch 1/1 
 Discriminator Loss is : 1.6265 
 Generator Loss is : 0.5230


Current Epoch 1/1 
 Discriminator Loss is : 1.6400 
 Generator Loss is : 0.3946



Current Epoch 1/1 
 Discriminator Loss is : 1.5589 
 Generator Loss is : 0.5315



Current Epoch 1/1 
 Discriminator Loss is : 1.5693 
 Generator Loss is : 0.7206



Current Epoch 1/1 
 Discriminator Loss is : 1.5840 
 Generator Loss is : 0.6012



Current Epoch 1/1 
 Discriminator Loss is : 1.7017 
 Generator Loss is : 0.3969


Current Epoch 1/1 
 Discriminator Loss is : 1.6158 
 Generator Loss is : 0.4224



Current Epoch 1/1 
 Discriminator Loss is : 1.3001 
 Generator Loss is : 1.0367



Current Epoch 1/1 
 Discriminator Loss is : 1.2694 
 Generator Loss is : 1.2923



Current Epoch 1/1 
 Discriminator Loss is : 1.5515 
 Generator Loss is : 0.4552



Current Epoch 1/1 
 Discriminator Loss is : 1.4652 
 Generator Loss is : 0.8508


Current Epoch 1/1 
 Discriminator Loss is : 1.5548 
 Generator Loss is : 0.7098



Current Epoch 1/1 
 Discriminator Loss is : 1.6548 
 Generator Loss is : 0.5616



Current Epoch 1/1 
 Discriminator Loss is : 1.5256 
 Generator Loss is : 0.8180



Current Epoch 1/1 
 Discriminator Loss is : 1.5304 
 Generator Loss is : 0.6329



Current Epoch 1/1 
 Discriminator Loss is : 1.5490 
 Generator Loss is : 0.5491


Current Epoch 1/1 
 Discriminator Loss is : 1.6463 
 Generator Loss is : 0.4848



Current Epoch 1/1 
 Discriminator Loss is : 1.5614 
 Generator Loss is : 0.6000



Current Epoch 1/1 
 Discriminator Loss is : 1.6856 
 Generator Loss is : 0.3864



Current Epoch 1/1 
 Discriminator Loss is : 1.3976 
 Generator Loss is : 0.7321



Current Epoch 1/1 
 Discriminator Loss is : 1.5464 
 Generator Loss is : 0.6948


Current Epoch 1/1 
 Discriminator Loss is : 1.5046 
 Generator Loss is : 0.7412



Current Epoch 1/1 
 Discriminator Loss is : 1.5159 
 Generator Loss is : 0.6834



Current Epoch 1/1 
 Discriminator Loss is : 1.5320 
 Generator Loss is : 0.6947



Current Epoch 1/1 
 Discriminator Loss is : 1.5391 
 Generator Loss is : 0.7290



Current Epoch 1/1 
 Discriminator Loss is : 1.5118 
 Generator Loss is : 0.7008


Current Epoch 1/1 
 Discriminator Loss is : 1.5307 
 Generator Loss is : 0.7063



Current Epoch 1/1 
 Discriminator Loss is : 1.5329 
 Generator Loss is : 0.6408



Current Epoch 1/1 
 Discriminator Loss is : 1.5000 
 Generator Loss is : 0.6932



Current Epoch 1/1 
 Discriminator Loss is : 1.5550 
 Generator Loss is : 0.6857



Current Epoch 1/1 
 Discriminator Loss is : 1.5361 
 Generator Loss is : 0.6723


Current Epoch 1/1 
 Discriminator Loss is : 1.5102 
 Generator Loss is : 0.6966



Current Epoch 1/1 
 Discriminator Loss is : 1.4835 
 Generator Loss is : 0.7137



Current Epoch 1/1 
 Discriminator Loss is : 1.4919 
 Generator Loss is : 0.6777



Current Epoch 1/1 
 Discriminator Loss is : 1.4700 
 Generator Loss is : 0.6896



Current Epoch 1/1 
 Discriminator Loss is : 1.5732 
 Generator Loss is : 0.6846


Current Epoch 1/1 
 Discriminator Loss is : 1.5251 
 Generator Loss is : 0.6421



Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.

My Personal Opinion

Honestly, I'm not satisfied with this result. I spent more than two days tuning the hyperparameters and tried several architectures, but DCGAN still feels like a black box. A week ago I read a paper about BEGAN and saw its incredible face-generation results, so I want to try it here.
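
For reference, a sketch of the BEGAN objective from Berthelot et al. (arXiv:1703.10717), which the cells below follow: the discriminator is an autoencoder, $\mathcal{L}(v)$ denotes its per-image L1 reconstruction loss, and a control variable $k_t$ keeps the two players in balance:

\mathcal{L}_D = \mathcal{L}(x) - k_t \,\mathcal{L}(G(z)), \qquad \mathcal{L}_G = \mathcal{L}(G(z))
k_{t+1} = k_t + \lambda_k \left( \gamma\,\mathcal{L}(x) - \mathcal{L}(G(z)) \right)
\mathcal{M}_{\text{global}} = \mathcal{L}(x) + \left| \gamma\,\mathcal{L}(x) - \mathcal{L}(G(z)) \right|

These correspond to d_loss, g_loss, k_tp, and m_global in Model_Loss below; gamma trades diversity against image quality, and lamda plays the role of $\lambda_k$.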


In [94]:
import time
import numpy as np
import tensorflow as tf

def conv2d(x, filter_shape, bias=True, stride=1, padding="SAME", name="conv2d"):
    kw, kh, nin, nout = filter_shape
    pad_size = (kw - 1) // 2  # integer division: tf.pad needs integer padding sizes

    if padding == "VALID":
        x = tf.pad(x, [[0, 0], [pad_size, pad_size], [pad_size, pad_size], [0, 0]], "SYMMETRIC")

    initializer = tf.random_normal_initializer(0., 0.02)
    with tf.variable_scope(name):
        weight = tf.get_variable("weight", shape=filter_shape, initializer=initializer)
        x = tf.nn.conv2d(x, weight, [1, stride, stride, 1], padding=padding)

        if bias:
            b = tf.get_variable("bias", shape=filter_shape[-1], initializer=tf.constant_initializer(0.))
            x = tf.nn.bias_add(x, b)
    return x

def fc(x, output_shape, bias=True, name='fc'):
    shape = x.get_shape().as_list()
    dim = np.prod(shape[1:])
    x = tf.reshape(x, [-1, dim])
    input_shape = dim

    initializer = tf.random_normal_initializer(0., 0.02)
    with tf.variable_scope(name):
        weight = tf.get_variable("weight", shape=[input_shape, output_shape], initializer=initializer)
        x = tf.matmul(x, weight)

        if bias:
            b = tf.get_variable("bias", shape=[output_shape], initializer=tf.constant_initializer(0.))
            x = tf.nn.bias_add(x, b)
    return x

def pool(x, r=2, s=1):
    return tf.nn.avg_pool(x, ksize=[1, r, r, 1], strides=[1, s, s, 1], padding="SAME")


def l1_loss(x, y):
    return tf.reduce_mean(tf.abs(x - y))


def resize_nn(x, size):
    return tf.image.resize_nearest_neighbor(x, size=(int(size), int(size)))
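
As a quick sanity check of these helpers, a hypothetical snippet (not part of the original notebook, assumes TensorFlow 1.x) that builds a tiny graph and shows the shapes they produce:

# Hypothetical shape check for the helpers above
with tf.Graph().as_default():
    images = tf.placeholder(tf.float32, [None, 64, 64, 3])
    h = conv2d(images, [3, 3, 3, 64], name='check_conv')  # -> (?, 64, 64, 64)
    h = pool(h, r=2, s=2)                                 # -> (?, 32, 32, 64)
    h = fc(h, 128, name='check_fc')                       # flattens, then -> (?, 128)
    print(h.get_shape().as_list())                        # [None, 128]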

In [95]:
def generator_began(x, reuse=None, data_size = 64, filter_number=64):
    with tf.variable_scope('generator') as scope:
        if reuse:
            scope.reuse_variables()

        w = data_size
        f = filter_number
        p = "SAME"

        x = fc(x, 8 * 8 * f, name='fc')
        x = tf.reshape(x, [-1, 8, 8, f])

        x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv1_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, f, f], stride=1,  padding=p, name='conv1_b')
        x = tf.nn.elu(x)

        if data_size == 128:
            x = resize_nn(x, w / 8)
            x = conv2d(x, [3, 3, f, f], stride=1,  padding=p, name='conv2_a')
            x = tf.nn.elu(x)
            x = conv2d(x, [3, 3, f, f], stride=1,  padding=p, name='conv2_b')
            x = tf.nn.elu(x)

        x = resize_nn(x, w / 4)
        x = conv2d(x, [3, 3, f, f], stride=1,  padding=p, name='conv3_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, f, f], stride=1,  padding=p, name='conv3_b')
        x = tf.nn.elu(x)

        x = resize_nn(x, w / 2)
        x = conv2d(x, [3, 3, f, f], stride=1,  padding=p, name='conv4_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, f, f], stride=1,  padding=p, name='conv4_b')
        x = tf.nn.elu(x)

        x = resize_nn(x, w)
        x = conv2d(x, [3, 3, f, f], stride=1,  padding=p,name='conv5_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, f, f], stride=1,  padding=p,name='conv5_b')
        x = tf.nn.elu(x)

        x = conv2d(x, [3, 3, f, 3], stride=1,  padding=p,name='conv6_a')
    return x

In [96]:
def encoder(x, reuse=None, data_size=64, filter_number=64, embedding=64):
    with tf.variable_scope('discriminator') as scope:
        if reuse:
            scope.reuse_variables()

        f = filter_number
        h = embedding
        p = "SAME"

        x = conv2d(x, [3, 3, 3, f], stride=1,  padding=p,name='conv1_enc_a')
        x = tf.nn.elu(x)

        x = conv2d(x, [3, 3, f, f], stride=1,  padding=p,name='conv2_enc_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, f, f], stride=1,  padding=p,name='conv2_enc_b')
        x = tf.nn.elu(x)

        x = conv2d(x, [1, 1, f, 2 * f], stride=1,  padding=p,name='conv3_enc_0')
        x = pool(x, r=2, s=2)
        x = conv2d(x, [3, 3, 2 * f, 2 * f], stride=1,  padding=p,name='conv3_enc_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, 2 * f, 2 * f], stride=1,  padding=p,name='conv3_enc_b')
        x = tf.nn.elu(x)

        x = conv2d(x, [1, 1, 2 * f, 3 * f], stride=1,  padding=p,name='conv4_enc_0')
        x = pool(x, r=2, s=2)
        x = conv2d(x, [3, 3, 3 * f, 3 * f], stride=1,  padding=p,name='conv4_enc_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, 3 * f, 3 * f], stride=1,  padding=p,name='conv4_enc_b')
        x = tf.nn.elu(x)

        x = conv2d(x, [1, 1, 3 * f, 4 * f], stride=1,  padding=p,name='conv5_enc_0')
        x = pool(x, r=2, s=2)
        x = conv2d(x, [3, 3, 4 * f, 4 * f], stride=1,  padding=p,name='conv5_enc_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, 4 * f, 4 * f], stride=1,  padding=p,name='conv5_enc_b')
        x = tf.nn.elu(x)

        if data_size == 128:
            x = conv2d(x, [1, 1, 4 * f, 5 * f], stride=1,  padding=p,name='conv6_enc_0')
            x = pool(x, r=2, s=2)
            x = conv2d(x, [3, 3, 5 * f, 5 * f], stride=1,  padding=p,name='conv6_enc_a')
            x = tf.nn.elu(x)
            x = conv2d(x, [3, 3, 5 * f, 5 * f], stride=1,  padding=p,name='conv6_enc_b')
            x = tf.nn.elu(x)

        x = fc(x, h, name='enc_fc')
    return x

In [97]:
def decoder(x, reuse=None, data_size=64, filter_number=64):
    with tf.variable_scope('discriminator') as scope:
        if reuse:
            scope.reuse_variables()

        w = data_size
        f = filter_number
        p = "SAME"

        x = fc(x, 8 * 8 * f, name='fc')
        x = tf.reshape(x, [-1, 8, 8, f])

        x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv1_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv1_b')
        x = tf.nn.elu(x)

        if data_size == 128:
            x = resize_nn(x, w / 8)
            x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv2_a')
            x = tf.nn.elu(x)
            x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv2_b')
            x = tf.nn.elu(x)

        # Mirror the generator: upsample to w/4 regardless of data_size
        x = resize_nn(x, w / 4)
        x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv3_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv3_b')
        x = tf.nn.elu(x)

        x = resize_nn(x, w / 2)
        x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv4_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv4_b')
        x = tf.nn.elu(x)

        x = resize_nn(x, w)
        x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv5_a')
        x = tf.nn.elu(x)
        x = conv2d(x, [3, 3, f, f], stride=1, padding=p, name='conv5_b')
        x = tf.nn.elu(x)

        x = conv2d(x, [3, 3, f, 3], stride=1, padding=p, name='conv6_a')
    return x

In [62]:
#Misc
flag = True
gpu_number = 0
image_size = 64

# Train Iteration
niter = 50
niter_snapshot = 2440
max_to_keep = 5

#Train Parameter
batch_size = 16
learning_rate = 1e-4
mm = 0.5 #momentum
mm2 = 0.5 #momentum2
lamda = 0.001
gamma = 0.5
filter_number = 64
input_size = 64
embedding = 64

In [98]:
def Model_Input(batch_size, input_size, data_size):
    x = tf.placeholder(tf.float32, shape=[batch_size, input_size], name='x')
    y = tf.placeholder(tf.float32, shape=[batch_size, data_size, data_size, 3], name='y')
    kt = tf.placeholder(tf.float32, name='kt')
    learning_rate = tf.placeholder(tf.float32, name='lr')
    return x, y, kt, learning_rate

In [99]:
def Model_Loss(y, gen, d_real, d_fake, kt, gamma):
    # Per-image L1 reconstruction losses of the autoencoding discriminator
    d_real_loss = l1_loss(y, d_real)
    d_fake_loss = l1_loss(gen, d_fake)
    d_loss = d_real_loss - kt * d_fake_loss
    g_loss = d_fake_loss
    lam = 0.001  # 'learning rate' for k; Berthelot et al. use 0.001
    k_tp = kt + lam * (gamma * d_real_loss - d_fake_loss)
    m_global = d_real_loss + tf.abs(gamma * d_real_loss - d_fake_loss)
    return d_real_loss, d_fake_loss, d_loss, g_loss, k_tp, m_global
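
As a quick sanity check of the k_t update, a hand computation with made-up numbers (not values from a training run):

# Illustrative k_t update (hypothetical numbers only)
gamma, lam = 0.5, 0.001
kt, d_real_loss, d_fake_loss = 0.0, 0.20, 0.05
kt_next = kt + lam * (gamma * d_real_loss - d_fake_loss)
print(kt_next)  # 5e-05: k rises while the generator is 'too easy' to reconstruct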

In [104]:
def Optimizer_BEGAN(g_vars, d_vars, lr, global_step, g_loss, d_loss):
    opt = tf.train.AdamOptimizer(lr, epsilon=1.0)
    opt_g_grad = opt.compute_gradients(g_loss, var_list=g_vars)
    opt_d_grad = opt.compute_gradients(d_loss, var_list=d_vars)
    opt_g_train = opt.apply_gradients(opt_g_grad)
    # Increment global_step only once per training iteration
    opt_d_train = opt.apply_gradients(opt_d_grad, global_step)
    return opt_g_train, opt_d_train

In [107]:
merged = None
writer = None
def train_Began(sess_, batch_size, get_batches, input_size, image_size, gamma, lamda, lr, mm, filter_number, data_shape, data_image_mode, embedding, 
          niter, niter_snapshot,  flag=True, Max_to_keep=10):
    _x, _y, _kt, _lr = Model_Input(batch_size, input_size, image_size)
    
    global_step = tf.get_variable('global_step', [],
                                      initializer=tf.constant_initializer(0),
                                      trainable=False)
    
    _recon_gen = generator_began(_x, filter_number=filter_number, data_size=image_size)
    _d_real = decoder(encoder(_y, data_size=image_size, embedding=embedding, filter_number=filter_number), 
                    filter_number=filter_number, data_size=image_size)
    _d_fake = decoder(encoder(_recon_gen, reuse=True, embedding=embedding, data_size=image_size, filter_number=filter_number), 
                    reuse=True, filter_number=filter_number,data_size=image_size)
    _recon_dec = decoder(_x, reuse=True, filter_number=filter_number, data_size=image_size)
    
    _d_real_loss, _d_fake_loss, _d_loss, _g_loss, _k_tp, _m_global = \
        Model_Loss(_y, _recon_gen, _d_real, _d_fake, _kt, gamma)
    
    _g_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, "generator")
    _d_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, "discriminator")
    
    _opt_g, _opt_d = Optimizer_BEGAN(_g_vars, _d_vars, _lr, global_step, _g_loss, _d_loss)
    
    sess_.run(tf.global_variables_initializer())
    
    saver =  tf.train.Saver(max_to_keep=(Max_to_keep))
    
    if flag:
        tf.summary.scalar('loss/loss', _d_loss + _g_loss)
        tf.summary.scalar('loss/g_loss', _g_loss)
        tf.summary.scalar('loss/d_loss', _d_loss)
        tf.summary.scalar('loss/d_real_loss', _d_real_loss)
        tf.summary.scalar('loss/d_fake_loss', _d_fake_loss)
        tf.summary.scalar('misc/kt', _kt)
        tf.summary.scalar('misc/m_global', _m_global)
        merged = tf.summary.merge_all()
        writer = tf.summary.FileWriter(".", sess_.graph)
    
    start_time = time.time()
    kt_ = np.float32(0.)  # control variable k_t, starts at 0 (Berthelot et al.)
    lr_ = np.float32(lr)
    count = 0
    
    for epoch in range (niter):
        m = 0
        for batch_images in get_batches(batch_size):
            
            count += 1
            m += 1
            batch_x = np.random.uniform(-1, 1, size=[batch_size, input_size])
            
            g_opt = [_opt_g, _g_loss, _d_real_loss, _d_fake_loss]
            d_opt = [_opt_d, _d_loss, merged]
            feed_dict = {_x: batch_x, _y: batch_images, _kt: kt_, _lr: lr_}
            
            # Train the model: one generator step, then one discriminator step
            _, loss_g, d_real_loss, d_fake_loss = sess_.run(g_opt, feed_dict=feed_dict)
            _, loss_d, summary = sess_.run(d_opt, feed_dict=feed_dict)
            
            # Update k_t (clamped to [0, 1]) and the convergence measure M_global
            kt_ = np.maximum(np.minimum(1., kt_ + lamda * (gamma * d_real_loss - d_fake_loss)), 0.)
            m_global = d_real_loss + np.abs(gamma * d_real_loss - d_fake_loss)
            loss = loss_g + loss_d
            
            writer.add_summary(summary, count)
            
            if count % niter_snapshot == (niter_snapshot - 1):
                # Decay the learning rate and snapshot the model
                lr_ *= 0.95
                saver.save(sess_, "BEGAN", global_step=count, write_meta_graph=False)
            
            if count % 5 == 0:
                print("Current Epoch {}/{} \n".format(epoch, niter),
                      "Discriminator Loss is : {:.4f} \n".format(loss_d),
                      "Generator Loss is : {:.4f}".format(loss_g))
                if count % 25 == 0:
                    print('\n')
                    test(sess_, batch_size, image_size, input_size, _recon_gen, _recon_dec, _x, data_image_mode, count)
                print("\n")

def inverse_image(img):
    img = (img + 0.5) * 255.
    img[img > 255] = 255
    img[img < 0] = 0
    img = img[..., ::-1] # bgr to rgb
    return img
                    
def test(sess, batch_size, data_size, input_size, recon_gen, recon_dec, x, image_mode, idx):
    cmap = None if image_mode == 'RGB' else 'gray'

    # Sample random z vectors and run the generator (and, for reference, the decoder)
    test_data = np.random.uniform(-1., 1., size=[batch_size, input_size])
    output_gen = sess.run(recon_gen, feed_dict={x: test_data})  # generator output
    output_dec = sess.run(recon_dec, feed_dict={x: test_data})  # decoder output

    images_grid = helper.images_square_grid(output_gen, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

In [108]:
#Misc
flag = True
gpu_number = 0
image_size = 28

# Train Iteration
niter = 2
niter_snapshot = 2440
max_to_keep = 5

#Train Parameter
batch_size = 128
learning_rate = 1e-4
mm = 0.5 #momentum
mm2 = 0.5 #momentum2
lamda = 0.001
gamma = 0.4
filter_number = 28
input_size = 28
embedding = 28

Max_to_keep = 100

#celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    with tf.Session() as sess:
        train_Began(sess, batch_size, celeba_dataset.get_batches, input_size, image_size, gamma, lamda, learning_rate, mm, filter_number,
              celeba_dataset.shape, celeba_dataset.image_mode, embedding, niter, niter_snapshot, flag, Max_to_keep)


Current Epoch 0/2 
 Discriminator Loss is : 0.2167 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2183 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2218 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2247 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2239 
 Generator Loss is : 0.0000



Current Epoch 0/2 
 Discriminator Loss is : 0.2209 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2191 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2274 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2231 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2195 
 Generator Loss is : 0.0000



Current Epoch 0/2 
 Discriminator Loss is : 0.2187 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2265 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2195 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2180 
 Generator Loss is : 0.0000


Current Epoch 0/2 
 Discriminator Loss is : 0.2167 
 Generator Loss is : 0.0000



---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-108-f26c2579edb5> in <module>()
     26     with tf.Session() as sess:
     27         train_Began(sess, batch_size, celeba_dataset.get_batches, input_size, image_size, gamma, lamda, learning_rate, mm, filter_number,
---> 28               celeba_dataset.shape, celeba_dataset.image_mode, embedding, niter, niter_snapshot, flag, Max_to_keep)

<ipython-input-107-3abc5f8e05c8> in train_Began(sess_, batch_size, get_batches, input_size, image_size, gamma, lamda, lr, mm, filter_number, data_shape, data_image_mode, embedding, niter, niter_snapshot, flag, Max_to_keep)
     58 
     59             #train the model
---> 60             _, loss_g, d_real_loss, d_fake_loss = sess_.run(g_opt, feed_dict=feed_dict)
     61             _, loss_d, summary = sess_.run(d_opt, feed_dict=feed_dict)
     62 

c:\users\administrator\anaconda3\envs\intro-to-rnns\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
    776     try:
    777       result = self._run(None, fetches, feed_dict, options_ptr,
--> 778                          run_metadata_ptr)
    779       if run_metadata:
    780         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

c:\users\administrator\anaconda3\envs\intro-to-rnns\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    980     if final_fetches or final_targets:
    981       results = self._do_run(handle, final_targets, final_fetches,
--> 982                              feed_dict_string, options, run_metadata)
    983     else:
    984       results = []

c:\users\administrator\anaconda3\envs\intro-to-rnns\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1030     if handle is None:
   1031       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1032                            target_list, options, run_metadata)
   1033     else:
   1034       return self._do_call(_prun_fn, self._session, handle, feed_dict,

c:\users\administrator\anaconda3\envs\intro-to-rnns\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
   1037   def _do_call(self, fn, *args):
   1038     try:
-> 1039       return fn(*args)
   1040     except errors.OpError as e:
   1041       message = compat.as_text(e.message)

c:\users\administrator\anaconda3\envs\intro-to-rnns\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1019         return tf_session.TF_Run(session, options,
   1020                                  feed_dict, fetch_list, target_list,
-> 1021                                  status, run_metadata)
   1022 
   1023     def _prun_fn(session, handle, feed_dict, fetch_list):

KeyboardInterrupt: 

In [ ]: