Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".


In [1]:
data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)


Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.


In [2]:
show_n_images = 4

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')


Out[2]:
<matplotlib.image.AxesImage at 0x7f9e14cecc50>

CelebA

The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.


In [3]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))


Out[3]:
<matplotlib.image.AxesImage at 0x7f9e14c138d0>

Preprocess the Data

Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.

The MNIST images are black and white images with a single color channel while the CelebA images have 3 color channels (RGB color channel).

Build the Neural Network

You'll build the components necessary to build a GANs by implementing the following functions below:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU


In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following the tuple (tensor of real input images, tensor of z data)


In [5]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # TODO: Implement Function
    
    # Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
    real_input_images = tf.placeholder(tf.float32, [None,image_width,image_height,image_channels], name = 'input_real')
    
    # Z input placeholder with rank 2 using z_dim.
    z_input = tf.placeholder(tf.float32, [None,z_dim], name = 'z_input')
    
    # Learning rate placeholder with rank 0.
    learning_rate = tf.placeholder(tf.float32, name = 'learning_rate')
    return real_input_images, z_input, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)


Tests Passed

In [6]:
def leaky_relu(x, alpha=0.2, name='leaky_relu'): 
    return tf.maximum(x, alpha * x, name=name)

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).


In [7]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    alpha = 0.2
    
    # TODO: Implement Function
    with tf.variable_scope('discriminator', reuse = reuse):
        
        # Layer1
        layer1 = tf.layers.conv2d(images, 64, 5, 
                                  strides=2,
                                  kernel_initializer = tf.contrib.layers.xavier_initializer(), 
                                  padding = 'same')
        lrelu1 = leaky_relu(layer1, alpha)
   
        
        # Layer2
        layer2 = tf.layers.conv2d(lrelu1, 128, 5, 
                                  strides = 2, 
                                  kernel_initializer = tf.contrib.layers.xavier_initializer(),
                                  padding = 'same')
        batnor2 = tf.layers.batch_normalization(layer2, training = True)
        lrelu2 = leaky_relu(batnor2, alpha)
        lrelu2 = tf.layers.dropout(lrelu2, 0.4)
        
        # Layer3
        layer3 = tf.layers.conv2d(lrelu2, 256, 5, 
                                  strides = 2, 
                                  kernel_initializer = tf.contrib.layers.xavier_initializer(),
                                  padding = 'same')
        batnor3 = tf.layers.batch_normalization(layer3, training = True)
        lrelu3 = leaky_relu(batnor3, alpha)
        lrelu3 = tf.layers.dropout(lrelu3, 0.4)
            
        # Flatten it
        flat = tf.reshape(lrelu3, (-1,4*4*256))
        logits = tf.layers.dense(flat, 1)
        output = tf.sigmoid(logits)
        
    return output, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)


Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.


In [8]:
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # TODO: Implement Function
    alpha = 0.2
    reuse = not is_train
    with tf.variable_scope('generator', reuse = reuse):
    
        # layer1 : fully connected layer  7*7*256
        layer1 = tf.layers.dense(z, 7*7*256)
        reshape1 = tf.reshape(layer1, (-1,7,7,256))
        batnor1 = tf.layers.batch_normalization(reshape1, training=is_train)
        lrelu1 = leaky_relu(batnor1, alpha)
        lrelu1 = tf.layers.dropout(lrelu1, 0.2, training=is_train)
        
        # layer2 14*14*128
        layer2 = tf.layers.conv2d_transpose(lrelu1, 128, 5, 
                                            strides=2, 
                                            kernel_initializer = tf.contrib.layers.xavier_initializer(),
                                            padding='same')
        batnor2 = tf.layers.batch_normalization(layer2, training = is_train)
        lrelu2 = leaky_relu(batnor2, alpha)
        lrelu2 = tf.layers.dropout(lrelu2, 0.2, training=is_train)
        
        # layer3 
        layer3 = tf.layers.conv2d_transpose(lrelu2, 64, 5, 
                                            strides=2, 
                                            kernel_initializer = tf.contrib.layers.xavier_initializer(), 
                                            padding='same')
        batnor3 = tf.layers.batch_normalization(layer3, training = is_train)
        lrelu3 = leaky_relu(batnor3, alpha)
        
        logits = tf.layers.conv2d_transpose(lrelu3, out_channel_dim,5, 
                                            strides=1, 
                                            kernel_initializer = tf.contrib.layers.xavier_initializer(), 
                                            padding='same')
        
        output = tf.tanh(logits)
    return output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)


Tests Passed

Loss

Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)

In [9]:
import numpy as np
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # TODO: Implement Function
    smooth = 0.1
    
    # Implement model_loss to build the GANs for training
    g_model = generator(input_z, out_channel_dim, is_train = True)
    d_model_real, d_logits_real = discriminator(input_real, reuse = False)
    
    # 
    d_model_fake, d_logits_fake = discriminator(g_model, reuse = True)
            
    #calculate the loss
    
    d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, 
                                                                         labels=tf.ones_like(d_logits_real)*(1-smooth)))
    d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, 
                                                                         labels=tf.zeros_like(d_logits_fake)+ np.random.uniform(0, .1)))
    
    discriminator_loss =  d_loss_real + d_loss_fake
    
    generator_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, 
                                                                    labels=tf.ones_like(d_logits_fake)))
    return discriminator_loss, generator_loss


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)


Tests Passed

Optimization

Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).


In [10]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # TODO: Implement Function

    t_vars = tf.trainable_variables()
    
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
    g_vars = [var for var in t_vars if var.name.startswith('generator')]
    
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        disc_train_op = tf.train.AdamOptimizer(learning_rate, 
                                            beta1=beta1).minimize(d_loss, 
                                                                  var_list=d_vars)
        gene_train_op = tf.train.AdamOptimizer(learning_rate, 
                                            beta1=beta1).minimize(g_loss, 
                                                                  var_list=g_vars)
    return disc_train_op, gene_train_op


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)


Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.


In [11]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GANs. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.


In [12]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # TODO: Build Model
    
    # model_inputs(image_width, image_height, image_channels, z_dim)
    real_input_images, z_input, learn_r = model_inputs(image_width = data_shape[1], 
                                                             image_height = data_shape[2] , 
                                                             image_channels = len(data_image_mode), 
                                                             z_dim = z_dim)
    # model_loss(input_real, input_z, out_channel_dim)
    d_loss,g_loss = model_loss(input_real = real_input_images, 
                               input_z = z_input, 
                               out_channel_dim = len(data_image_mode))
    
    # model_opt(d_loss, g_loss, learning_rate, beta1)
    d_opt, g_opt = model_opt(d_loss , g_loss, learn_r, beta1)
    
    turn = 0 
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                
                # TODO: Train Model
                turn += 1 
                
                     
                # Sample random noise for G
                batch_z = np.random.uniform(-1, 1 , size=[batch_size, z_dim])
                
                # values of the MNIST and CelebA dataset are in the range of -0.5 to 0.5 : 
                batch_images *=2
                
                # Run optimizers
                _ = sess.run([d_opt, g_opt], 
                             feed_dict={real_input_images: batch_images, z_input: batch_z, learn_r:learning_rate})
                _ = sess.run([d_opt, g_opt], 
                             feed_dict={real_input_images: batch_images, z_input: batch_z, learn_r:learning_rate})
                # Run a third time the generator
                # if turn % 2 == 0:
                _ = sess.run(g_opt, 
                             feed_dict={real_input_images: batch_images, z_input: batch_z, learn_r:learning_rate})
                if turn % 10 == 0:
                    print(" Turn : {} ".format(turn), end="")
                
                if turn % 50 == 0:
                    d_train_loss = d_loss.eval({real_input_images: batch_images, z_input: batch_z})
                    g_train_loss = g_loss.eval({real_input_images: batch_images, z_input: batch_z})
                    print()
                    print("Turn : {} - ".format(turn),
                        "Epoch {}/{} - ".format(epoch_i+1, epoch_count),
                         "Discriminator loss : {:.4f} - ".format(d_train_loss),
                         "Generator loss : {:.4f}. ".format(g_train_loss))
                if turn % 100 == 0:
                    show_generator_output(sess, 25, z_input, len(data_image_mode), data_image_mode)

In [ ]:

MNIST

Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.


In [72]:
batch_size = 64
z_dim = 100
learning_rate = 0.0005
beta1 = 0.5


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)


 Turn : 10  Turn : 20  Turn : 30  Turn : 40  Turn : 50  Turn : 60  Turn : 70  Turn : 80  Turn : 90  Turn : 100 Turn : 100 -  Epoch 1/2 -  Discriminator loss : 0.9597 -  Generator loss : 1.0181. 
 Turn : 110  Turn : 120  Turn : 130  Turn : 140  Turn : 150  Turn : 160  Turn : 170  Turn : 180  Turn : 190  Turn : 200 Turn : 200 -  Epoch 1/2 -  Discriminator loss : 0.9555 -  Generator loss : 1.4061. 
 Turn : 210  Turn : 220  Turn : 230  Turn : 240  Turn : 250  Turn : 260  Turn : 270  Turn : 280  Turn : 290  Turn : 300 Turn : 300 -  Epoch 1/2 -  Discriminator loss : 0.7939 -  Generator loss : 1.6397. 
 Turn : 310  Turn : 320  Turn : 330  Turn : 340  Turn : 350  Turn : 360  Turn : 370  Turn : 380  Turn : 390  Turn : 400 Turn : 400 -  Epoch 1/2 -  Discriminator loss : 1.9273 -  Generator loss : 0.3955. 
 Turn : 410  Turn : 420  Turn : 430  Turn : 440  Turn : 450  Turn : 460  Turn : 470  Turn : 480  Turn : 490  Turn : 500 Turn : 500 -  Epoch 1/2 -  Discriminator loss : 1.4528 -  Generator loss : 0.5123. 
 Turn : 510  Turn : 520  Turn : 530  Turn : 540  Turn : 550  Turn : 560  Turn : 570  Turn : 580  Turn : 590  Turn : 600 Turn : 600 -  Epoch 1/2 -  Discriminator loss : 0.9338 -  Generator loss : 0.9904. 
 Turn : 610  Turn : 620  Turn : 630  Turn : 640  Turn : 650  Turn : 660  Turn : 670  Turn : 680  Turn : 690  Turn : 700 Turn : 700 -  Epoch 1/2 -  Discriminator loss : 0.7391 -  Generator loss : 1.8869. 
 Turn : 710  Turn : 720  Turn : 730  Turn : 740  Turn : 750  Turn : 760  Turn : 770  Turn : 780  Turn : 790  Turn : 800 Turn : 800 -  Epoch 1/2 -  Discriminator loss : 1.0676 -  Generator loss : 0.7465. 
 Turn : 810  Turn : 820  Turn : 830  Turn : 840  Turn : 850  Turn : 860  Turn : 870  Turn : 880  Turn : 890  Turn : 900 Turn : 900 -  Epoch 1/2 -  Discriminator loss : 0.7953 -  Generator loss : 1.3070. 
 Turn : 910  Turn : 920  Turn : 930  Turn : 940  Turn : 950  Turn : 960  Turn : 970  Turn : 980  Turn : 990  Turn : 1000 Turn : 1000 -  Epoch 2/2 -  Discriminator loss : 0.8565 -  Generator loss : 1.1283. 
 Turn : 1010  Turn : 1020  Turn : 1030  Turn : 1040  Turn : 1050  Turn : 1060  Turn : 1070  Turn : 1080  Turn : 1090  Turn : 1100 Turn : 1100 -  Epoch 2/2 -  Discriminator loss : 0.7857 -  Generator loss : 1.2458. 
 Turn : 1110  Turn : 1120  Turn : 1130  Turn : 1140  Turn : 1150  Turn : 1160  Turn : 1170  Turn : 1180  Turn : 1190  Turn : 1200 Turn : 1200 -  Epoch 2/2 -  Discriminator loss : 0.7148 -  Generator loss : 1.3835. 
 Turn : 1210  Turn : 1220  Turn : 1230  Turn : 1240  Turn : 1250  Turn : 1260  Turn : 1270  Turn : 1280  Turn : 1290  Turn : 1300 Turn : 1300 -  Epoch 2/2 -  Discriminator loss : 0.6493 -  Generator loss : 1.5980. 
 Turn : 1310  Turn : 1320  Turn : 1330  Turn : 1340  Turn : 1350  Turn : 1360  Turn : 1370  Turn : 1380  Turn : 1390  Turn : 1400 Turn : 1400 -  Epoch 2/2 -  Discriminator loss : 0.6419 -  Generator loss : 1.9215. 
 Turn : 1410  Turn : 1420  Turn : 1430  Turn : 1440  Turn : 1450  Turn : 1460  Turn : 1470  Turn : 1480  Turn : 1490  Turn : 1500 Turn : 1500 -  Epoch 2/2 -  Discriminator loss : 0.7323 -  Generator loss : 2.0511. 
 Turn : 1510  Turn : 1520  Turn : 1530  Turn : 1540  Turn : 1550  Turn : 1560  Turn : 1570  Turn : 1580  Turn : 1590  Turn : 1600 Turn : 1600 -  Epoch 2/2 -  Discriminator loss : 0.8254 -  Generator loss : 1.0715. 
 Turn : 1610  Turn : 1620  Turn : 1630  Turn : 1640  Turn : 1650  Turn : 1660  Turn : 1670  Turn : 1680  Turn : 1690  Turn : 1700 Turn : 1700 -  Epoch 2/2 -  Discriminator loss : 0.7043 -  Generator loss : 1.3314. 
 Turn : 1710  Turn : 1720  Turn : 1730  Turn : 1740  Turn : 1750  Turn : 1760  Turn : 1770  Turn : 1780  Turn : 1790  Turn : 1800 Turn : 1800 -  Epoch 2/2 -  Discriminator loss : 1.0740 -  Generator loss : 0.8489. 
 Turn : 1810  Turn : 1820  Turn : 1830  Turn : 1840  Turn : 1850  Turn : 1860  Turn : 1870 

CelebA

Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.


In [13]:
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.5


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)


 Turn : 10  Turn : 20  Turn : 30  Turn : 40  Turn : 50 
Turn : 50 -  Epoch 1/1 -  Discriminator loss : 0.9605 -  Generator loss : 1.1761. 
 Turn : 60  Turn : 70  Turn : 80  Turn : 90  Turn : 100 
Turn : 100 -  Epoch 1/1 -  Discriminator loss : 1.8290 -  Generator loss : 0.4682. 
 Turn : 110  Turn : 120  Turn : 130  Turn : 140  Turn : 150 
Turn : 150 -  Epoch 1/1 -  Discriminator loss : 1.7895 -  Generator loss : 1.1344. 
 Turn : 160  Turn : 170  Turn : 180  Turn : 190  Turn : 200 
Turn : 200 -  Epoch 1/1 -  Discriminator loss : 1.5795 -  Generator loss : 0.6558. 
 Turn : 210  Turn : 220  Turn : 230  Turn : 240  Turn : 250 
Turn : 250 -  Epoch 1/1 -  Discriminator loss : 1.4087 -  Generator loss : 0.6146. 
 Turn : 260  Turn : 270  Turn : 280  Turn : 290  Turn : 300 
Turn : 300 -  Epoch 1/1 -  Discriminator loss : 1.5997 -  Generator loss : 0.5525. 
 Turn : 310  Turn : 320  Turn : 330  Turn : 340  Turn : 350 
Turn : 350 -  Epoch 1/1 -  Discriminator loss : 1.4829 -  Generator loss : 0.6715. 
 Turn : 360  Turn : 370  Turn : 380  Turn : 390  Turn : 400 
Turn : 400 -  Epoch 1/1 -  Discriminator loss : 1.7554 -  Generator loss : 0.4188. 
 Turn : 410  Turn : 420  Turn : 430  Turn : 440  Turn : 450 
Turn : 450 -  Epoch 1/1 -  Discriminator loss : 1.5629 -  Generator loss : 0.7810. 
 Turn : 460  Turn : 470  Turn : 480  Turn : 490  Turn : 500 
Turn : 500 -  Epoch 1/1 -  Discriminator loss : 1.6224 -  Generator loss : 0.4242. 
 Turn : 510  Turn : 520  Turn : 530  Turn : 540  Turn : 550 
Turn : 550 -  Epoch 1/1 -  Discriminator loss : 1.4443 -  Generator loss : 0.6433. 
 Turn : 560  Turn : 570  Turn : 580  Turn : 590  Turn : 600 
Turn : 600 -  Epoch 1/1 -  Discriminator loss : 1.3959 -  Generator loss : 0.7299. 
 Turn : 610  Turn : 620  Turn : 630  Turn : 640  Turn : 650 
Turn : 650 -  Epoch 1/1 -  Discriminator loss : 1.5348 -  Generator loss : 0.6500. 
 Turn : 660  Turn : 670  Turn : 680  Turn : 690  Turn : 700 
Turn : 700 -  Epoch 1/1 -  Discriminator loss : 1.4358 -  Generator loss : 0.6561. 
 Turn : 710  Turn : 720  Turn : 730  Turn : 740  Turn : 750 
Turn : 750 -  Epoch 1/1 -  Discriminator loss : 1.3114 -  Generator loss : 0.6755. 
 Turn : 760  Turn : 770  Turn : 780  Turn : 790  Turn : 800 
Turn : 800 -  Epoch 1/1 -  Discriminator loss : 1.4422 -  Generator loss : 0.7386. 
 Turn : 810  Turn : 820  Turn : 830  Turn : 840  Turn : 850 
Turn : 850 -  Epoch 1/1 -  Discriminator loss : 1.4010 -  Generator loss : 0.5985. 
 Turn : 860  Turn : 870  Turn : 880  Turn : 890  Turn : 900 
Turn : 900 -  Epoch 1/1 -  Discriminator loss : 1.4113 -  Generator loss : 0.6278. 
 Turn : 910  Turn : 920  Turn : 930  Turn : 940  Turn : 950 
Turn : 950 -  Epoch 1/1 -  Discriminator loss : 1.5147 -  Generator loss : 0.6379. 
 Turn : 960  Turn : 970  Turn : 980  Turn : 990  Turn : 1000 
Turn : 1000 -  Epoch 1/1 -  Discriminator loss : 1.5005 -  Generator loss : 0.7537. 
 Turn : 1010  Turn : 1020  Turn : 1030  Turn : 1040  Turn : 1050 
Turn : 1050 -  Epoch 1/1 -  Discriminator loss : 1.1976 -  Generator loss : 0.8023. 
 Turn : 1060  Turn : 1070  Turn : 1080  Turn : 1090  Turn : 1100 
Turn : 1100 -  Epoch 1/1 -  Discriminator loss : 1.4593 -  Generator loss : 0.6621. 
 Turn : 1110  Turn : 1120  Turn : 1130  Turn : 1140  Turn : 1150 
Turn : 1150 -  Epoch 1/1 -  Discriminator loss : 1.5681 -  Generator loss : 0.5872. 
 Turn : 1160  Turn : 1170  Turn : 1180  Turn : 1190  Turn : 1200 
Turn : 1200 -  Epoch 1/1 -  Discriminator loss : 1.3887 -  Generator loss : 0.9124. 
 Turn : 1210  Turn : 1220  Turn : 1230  Turn : 1240  Turn : 1250 
Turn : 1250 -  Epoch 1/1 -  Discriminator loss : 1.4057 -  Generator loss : 0.5988. 
 Turn : 1260  Turn : 1270  Turn : 1280  Turn : 1290  Turn : 1300 
Turn : 1300 -  Epoch 1/1 -  Discriminator loss : 1.4806 -  Generator loss : 0.6900. 
 Turn : 1310  Turn : 1320  Turn : 1330  Turn : 1340  Turn : 1350 
Turn : 1350 -  Epoch 1/1 -  Discriminator loss : 1.4000 -  Generator loss : 0.8052. 
 Turn : 1360  Turn : 1370  Turn : 1380  Turn : 1390  Turn : 1400 
Turn : 1400 -  Epoch 1/1 -  Discriminator loss : 1.4720 -  Generator loss : 0.7753. 
 Turn : 1410  Turn : 1420  Turn : 1430  Turn : 1440  Turn : 1450 
Turn : 1450 -  Epoch 1/1 -  Discriminator loss : 1.4255 -  Generator loss : 0.7056. 
 Turn : 1460  Turn : 1470  Turn : 1480  Turn : 1490  Turn : 1500 
Turn : 1500 -  Epoch 1/1 -  Discriminator loss : 1.4137 -  Generator loss : 0.7260. 
 Turn : 1510  Turn : 1520  Turn : 1530  Turn : 1540  Turn : 1550 
Turn : 1550 -  Epoch 1/1 -  Discriminator loss : 1.5150 -  Generator loss : 0.6112. 
 Turn : 1560  Turn : 1570  Turn : 1580  Turn : 1590  Turn : 1600 
Turn : 1600 -  Epoch 1/1 -  Discriminator loss : 1.3442 -  Generator loss : 0.8159. 
 Turn : 1610  Turn : 1620  Turn : 1630  Turn : 1640  Turn : 1650 
Turn : 1650 -  Epoch 1/1 -  Discriminator loss : 1.4910 -  Generator loss : 0.5827. 
 Turn : 1660  Turn : 1670  Turn : 1680  Turn : 1690  Turn : 1700 
Turn : 1700 -  Epoch 1/1 -  Discriminator loss : 1.3319 -  Generator loss : 0.8081. 
 Turn : 1710  Turn : 1720  Turn : 1730  Turn : 1740  Turn : 1750 
Turn : 1750 -  Epoch 1/1 -  Discriminator loss : 1.4243 -  Generator loss : 0.8604. 
 Turn : 1760  Turn : 1770  Turn : 1780  Turn : 1790  Turn : 1800 
Turn : 1800 -  Epoch 1/1 -  Discriminator loss : 1.4163 -  Generator loss : 0.5957. 
 Turn : 1810  Turn : 1820  Turn : 1830  Turn : 1840  Turn : 1850 
Turn : 1850 -  Epoch 1/1 -  Discriminator loss : 1.5566 -  Generator loss : 0.4788. 
 Turn : 1860  Turn : 1870  Turn : 1880  Turn : 1890  Turn : 1900 
Turn : 1900 -  Epoch 1/1 -  Discriminator loss : 1.4076 -  Generator loss : 0.9678. 
 Turn : 1910  Turn : 1920  Turn : 1930  Turn : 1940  Turn : 1950 
Turn : 1950 -  Epoch 1/1 -  Discriminator loss : 1.4378 -  Generator loss : 0.5665. 
 Turn : 1960  Turn : 1970  Turn : 1980  Turn : 1990  Turn : 2000 
Turn : 2000 -  Epoch 1/1 -  Discriminator loss : 1.3168 -  Generator loss : 0.7606. 
 Turn : 2010  Turn : 2020  Turn : 2030  Turn : 2040  Turn : 2050 
Turn : 2050 -  Epoch 1/1 -  Discriminator loss : 1.4316 -  Generator loss : 0.8112. 
 Turn : 2060  Turn : 2070  Turn : 2080  Turn : 2090  Turn : 2100 
Turn : 2100 -  Epoch 1/1 -  Discriminator loss : 1.4106 -  Generator loss : 0.7130. 
 Turn : 2110  Turn : 2120  Turn : 2130  Turn : 2140  Turn : 2150 
Turn : 2150 -  Epoch 1/1 -  Discriminator loss : 1.4066 -  Generator loss : 0.7513. 
 Turn : 2160  Turn : 2170  Turn : 2180  Turn : 2190  Turn : 2200 
Turn : 2200 -  Epoch 1/1 -  Discriminator loss : 1.4877 -  Generator loss : 0.6360. 
 Turn : 2210  Turn : 2220  Turn : 2230  Turn : 2240  Turn : 2250 
Turn : 2250 -  Epoch 1/1 -  Discriminator loss : 1.4133 -  Generator loss : 0.6812. 
 Turn : 2260  Turn : 2270  Turn : 2280  Turn : 2290  Turn : 2300 
Turn : 2300 -  Epoch 1/1 -  Discriminator loss : 1.3682 -  Generator loss : 0.7901. 
 Turn : 2310  Turn : 2320  Turn : 2330  Turn : 2340  Turn : 2350 
Turn : 2350 -  Epoch 1/1 -  Discriminator loss : 1.5114 -  Generator loss : 0.6980. 
 Turn : 2360  Turn : 2370  Turn : 2380  Turn : 2390  Turn : 2400 
Turn : 2400 -  Epoch 1/1 -  Discriminator loss : 1.4280 -  Generator loss : 0.6381. 
 Turn : 2410  Turn : 2420  Turn : 2430  Turn : 2440  Turn : 2450 
Turn : 2450 -  Epoch 1/1 -  Discriminator loss : 1.3296 -  Generator loss : 0.7232. 
 Turn : 2460  Turn : 2470  Turn : 2480  Turn : 2490  Turn : 2500 
Turn : 2500 -  Epoch 1/1 -  Discriminator loss : 1.4226 -  Generator loss : 0.7402. 
 Turn : 2510  Turn : 2520  Turn : 2530  Turn : 2540  Turn : 2550 
Turn : 2550 -  Epoch 1/1 -  Discriminator loss : 1.3841 -  Generator loss : 0.7646. 
 Turn : 2560  Turn : 2570  Turn : 2580  Turn : 2590  Turn : 2600 
Turn : 2600 -  Epoch 1/1 -  Discriminator loss : 1.4488 -  Generator loss : 0.7268. 
 Turn : 2610  Turn : 2620  Turn : 2630  Turn : 2640  Turn : 2650 
Turn : 2650 -  Epoch 1/1 -  Discriminator loss : 1.3262 -  Generator loss : 0.8498. 
 Turn : 2660  Turn : 2670  Turn : 2680  Turn : 2690  Turn : 2700 
Turn : 2700 -  Epoch 1/1 -  Discriminator loss : 1.4413 -  Generator loss : 0.6110. 
 Turn : 2710  Turn : 2720  Turn : 2730  Turn : 2740  Turn : 2750 
Turn : 2750 -  Epoch 1/1 -  Discriminator loss : 1.3598 -  Generator loss : 0.8140. 
 Turn : 2760  Turn : 2770  Turn : 2780  Turn : 2790  Turn : 2800 
Turn : 2800 -  Epoch 1/1 -  Discriminator loss : 1.3867 -  Generator loss : 0.5397. 
 Turn : 2810  Turn : 2820  Turn : 2830  Turn : 2840  Turn : 2850 
Turn : 2850 -  Epoch 1/1 -  Discriminator loss : 1.6142 -  Generator loss : 0.5703. 
 Turn : 2860  Turn : 2870  Turn : 2880  Turn : 2890  Turn : 2900 
Turn : 2900 -  Epoch 1/1 -  Discriminator loss : 1.3206 -  Generator loss : 0.7561. 
 Turn : 2910  Turn : 2920  Turn : 2930  Turn : 2940  Turn : 2950 
Turn : 2950 -  Epoch 1/1 -  Discriminator loss : 1.4383 -  Generator loss : 0.6062. 
 Turn : 2960  Turn : 2970  Turn : 2980  Turn : 2990  Turn : 3000 
Turn : 3000 -  Epoch 1/1 -  Discriminator loss : 1.3712 -  Generator loss : 0.7587. 
 Turn : 3010  Turn : 3020  Turn : 3030  Turn : 3040  Turn : 3050 
Turn : 3050 -  Epoch 1/1 -  Discriminator loss : 1.3620 -  Generator loss : 0.7183. 
 Turn : 3060  Turn : 3070  Turn : 3080  Turn : 3090  Turn : 3100 
Turn : 3100 -  Epoch 1/1 -  Discriminator loss : 1.4163 -  Generator loss : 0.6248. 
 Turn : 3110  Turn : 3120  Turn : 3130  Turn : 3140  Turn : 3150 
Turn : 3150 -  Epoch 1/1 -  Discriminator loss : 1.4380 -  Generator loss : 0.7100. 
 Turn : 3160 

In [ ]:
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.1


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)


 Turn : 10  Turn : 20  Turn : 30  Turn : 40  Turn : 50 
Turn : 50 -  Epoch 1/1 -  Discriminator loss : 1.0472 -  Generator loss : 1.2087. 
 Turn : 60  Turn : 70  Turn : 80  Turn : 90  Turn : 100 
Turn : 100 -  Epoch 1/1 -  Discriminator loss : 1.4365 -  Generator loss : 0.6559. 
 Turn : 110  Turn : 120  Turn : 130  Turn : 140  Turn : 150 
Turn : 150 -  Epoch 1/1 -  Discriminator loss : 1.2775 -  Generator loss : 1.3142. 
 Turn : 160  Turn : 170  Turn : 180  Turn : 190  Turn : 200 
Turn : 200 -  Epoch 1/1 -  Discriminator loss : 1.3663 -  Generator loss : 0.6704. 
 Turn : 210  Turn : 220  Turn : 230  Turn : 240  Turn : 250 
Turn : 250 -  Epoch 1/1 -  Discriminator loss : 1.3779 -  Generator loss : 0.7987. 
 Turn : 260  Turn : 270  Turn : 280  Turn : 290  Turn : 300 
Turn : 300 -  Epoch 1/1 -  Discriminator loss : 1.7215 -  Generator loss : 0.3165. 
 Turn : 310  Turn : 320  Turn : 330  Turn : 340  Turn : 350 
Turn : 350 -  Epoch 1/1 -  Discriminator loss : 1.8282 -  Generator loss : 0.3192. 
 Turn : 360  Turn : 370  Turn : 380  Turn : 390  Turn : 400 
Turn : 400 -  Epoch 1/1 -  Discriminator loss : 1.5807 -  Generator loss : 0.6943. 
 Turn : 410  Turn : 420  Turn : 430  Turn : 440  Turn : 450 
Turn : 450 -  Epoch 1/1 -  Discriminator loss : 1.3853 -  Generator loss : 0.5320. 
 Turn : 460  Turn : 470  Turn : 480  Turn : 490  Turn : 500 
Turn : 500 -  Epoch 1/1 -  Discriminator loss : 1.4691 -  Generator loss : 0.5517. 
 Turn : 510  Turn : 520  Turn : 530  Turn : 540  Turn : 550 
Turn : 550 -  Epoch 1/1 -  Discriminator loss : 1.3484 -  Generator loss : 0.7018. 
 Turn : 560  Turn : 570  Turn : 580  Turn : 590  Turn : 600 
Turn : 600 -  Epoch 1/1 -  Discriminator loss : 1.2692 -  Generator loss : 0.6548. 
 Turn : 610  Turn : 620  Turn : 630  Turn : 640  Turn : 650 
Turn : 650 -  Epoch 1/1 -  Discriminator loss : 1.4293 -  Generator loss : 0.7749. 
 Turn : 660  Turn : 670  Turn : 680  Turn : 690  Turn : 700 
Turn : 700 -  Epoch 1/1 -  Discriminator loss : 1.4554 -  Generator loss : 0.7081. 
 Turn : 710  Turn : 720  Turn : 730  Turn : 740  Turn : 750 
Turn : 750 -  Epoch 1/1 -  Discriminator loss : 1.3689 -  Generator loss : 0.7426. 
 Turn : 760  Turn : 770  Turn : 780  Turn : 790  Turn : 800 
Turn : 800 -  Epoch 1/1 -  Discriminator loss : 1.3861 -  Generator loss : 0.9056. 
 Turn : 810  Turn : 820  Turn : 830  Turn : 840  Turn : 850 
Turn : 850 -  Epoch 1/1 -  Discriminator loss : 1.4092 -  Generator loss : 0.8189. 
 Turn : 860  Turn : 870  Turn : 880  Turn : 890  Turn : 900 
Turn : 900 -  Epoch 1/1 -  Discriminator loss : 1.3590 -  Generator loss : 0.7949. 
 Turn : 910  Turn : 920  Turn : 930  Turn : 940  Turn : 950 
Turn : 950 -  Epoch 1/1 -  Discriminator loss : 1.8208 -  Generator loss : 0.3059. 
 Turn : 960  Turn : 970  Turn : 980  Turn : 990  Turn : 1000 
Turn : 1000 -  Epoch 1/1 -  Discriminator loss : 1.4047 -  Generator loss : 0.8648. 
 Turn : 1010  Turn : 1020  Turn : 1030  Turn : 1040  Turn : 1050 
Turn : 1050 -  Epoch 1/1 -  Discriminator loss : 1.4042 -  Generator loss : 0.8373. 
 Turn : 1060  Turn : 1070  Turn : 1080  Turn : 1090  Turn : 1100 
Turn : 1100 -  Epoch 1/1 -  Discriminator loss : 1.3873 -  Generator loss : 0.8127. 
 Turn : 1110  Turn : 1120  Turn : 1130  Turn : 1140  Turn : 1150 
Turn : 1150 -  Epoch 1/1 -  Discriminator loss : 1.5374 -  Generator loss : 0.5657. 
 Turn : 1160  Turn : 1170  Turn : 1180  Turn : 1190  Turn : 1200 
Turn : 1200 -  Epoch 1/1 -  Discriminator loss : 1.3716 -  Generator loss : 0.9091. 
 Turn : 1210  Turn : 1220  Turn : 1230  Turn : 1240  Turn : 1250 
Turn : 1250 -  Epoch 1/1 -  Discriminator loss : 1.3570 -  Generator loss : 0.6926. 
 Turn : 1260  Turn : 1270  Turn : 1280  Turn : 1290  Turn : 1300 
Turn : 1300 -  Epoch 1/1 -  Discriminator loss : 1.4037 -  Generator loss : 0.6987. 
 Turn : 1310  Turn : 1320  Turn : 1330  Turn : 1340  Turn : 1350 
Turn : 1350 -  Epoch 1/1 -  Discriminator loss : 1.3977 -  Generator loss : 0.8673. 
 Turn : 1360  Turn : 1370  Turn : 1380  Turn : 1390  Turn : 1400 
Turn : 1400 -  Epoch 1/1 -  Discriminator loss : 1.3965 -  Generator loss : 0.7986. 
 Turn : 1410  Turn : 1420  Turn : 1430  Turn : 1440  Turn : 1450 
Turn : 1450 -  Epoch 1/1 -  Discriminator loss : 1.3660 -  Generator loss : 0.7810. 
 Turn : 1460  Turn : 1470  Turn : 1480  Turn : 1490  Turn : 1500 
Turn : 1500 -  Epoch 1/1 -  Discriminator loss : 2.0819 -  Generator loss : 0.2043. 
 Turn : 1510  Turn : 1520  Turn : 1530  Turn : 1540  Turn : 1550 
Turn : 1550 -  Epoch 1/1 -  Discriminator loss : 1.5125 -  Generator loss : 0.5966. 
 Turn : 1560  Turn : 1570  Turn : 1580  Turn : 1590  Turn : 1600 
Turn : 1600 -  Epoch 1/1 -  Discriminator loss : 1.3954 -  Generator loss : 0.8114. 
 Turn : 1610  Turn : 1620  Turn : 1630  Turn : 1640  Turn : 1650 
Turn : 1650 -  Epoch 1/1 -  Discriminator loss : 1.5198 -  Generator loss : 0.5376. 
 Turn : 1660  Turn : 1670  Turn : 1680  Turn : 1690  Turn : 1700 
Turn : 1700 -  Epoch 1/1 -  Discriminator loss : 1.4391 -  Generator loss : 0.8569. 
 Turn : 1710  Turn : 1720  Turn : 1730  Turn : 1740  Turn : 1750 
Turn : 1750 -  Epoch 1/1 -  Discriminator loss : 1.4568 -  Generator loss : 0.7321. 
 Turn : 1760  Turn : 1770  Turn : 1780  Turn : 1790  Turn : 1800 
Turn : 1800 -  Epoch 1/1 -  Discriminator loss : 1.3029 -  Generator loss : 0.6134. 
 Turn : 1810  Turn : 1820  Turn : 1830  Turn : 1840  Turn : 1850 
Turn : 1850 -  Epoch 1/1 -  Discriminator loss : 1.1937 -  Generator loss : 0.7541. 
 Turn : 1860  Turn : 1870  Turn : 1880  Turn : 1890  Turn : 1900 
Turn : 1900 -  Epoch 1/1 -  Discriminator loss : 1.3514 -  Generator loss : 0.8971. 
 Turn : 1910  Turn : 1920  Turn : 1930  Turn : 1940  Turn : 1950 
Turn : 1950 -  Epoch 1/1 -  Discriminator loss : 1.4072 -  Generator loss : 0.6823. 
 Turn : 1960  Turn : 1970  Turn : 1980  Turn : 1990  Turn : 2000 
Turn : 2000 -  Epoch 1/1 -  Discriminator loss : 1.3560 -  Generator loss : 0.7776. 
 Turn : 2010  Turn : 2020  Turn : 2030  Turn : 2040  Turn : 2050 
Turn : 2050 -  Epoch 1/1 -  Discriminator loss : 1.6346 -  Generator loss : 0.7790. 
 Turn : 2060  Turn : 2070  Turn : 2080  Turn : 2090  Turn : 2100 
Turn : 2100 -  Epoch 1/1 -  Discriminator loss : 1.3900 -  Generator loss : 0.8587. 
 Turn : 2110  Turn : 2120  Turn : 2130  Turn : 2140  Turn : 2150 
Turn : 2150 -  Epoch 1/1 -  Discriminator loss : 1.3706 -  Generator loss : 0.6248. 
 Turn : 2160  Turn : 2170  Turn : 2180  Turn : 2190  Turn : 2200 
Turn : 2200 -  Epoch 1/1 -  Discriminator loss : 1.3342 -  Generator loss : 0.8631. 
 Turn : 2210  Turn : 2220  Turn : 2230  Turn : 2240  Turn : 2250 
Turn : 2250 -  Epoch 1/1 -  Discriminator loss : 1.4523 -  Generator loss : 0.6056. 
 Turn : 2260  Turn : 2270  Turn : 2280  Turn : 2290  Turn : 2300 
Turn : 2300 -  Epoch 1/1 -  Discriminator loss : 1.3190 -  Generator loss : 0.6159. 
 Turn : 2310  Turn : 2320  Turn : 2330  Turn : 2340  Turn : 2350 
Turn : 2350 -  Epoch 1/1 -  Discriminator loss : 1.3945 -  Generator loss : 0.5732. 
 Turn : 2360  Turn : 2370  Turn : 2380  Turn : 2390  Turn : 2400 
Turn : 2400 -  Epoch 1/1 -  Discriminator loss : 1.3905 -  Generator loss : 0.8255. 
 Turn : 2410  Turn : 2420  Turn : 2430  Turn : 2440  Turn : 2450 
Turn : 2450 -  Epoch 1/1 -  Discriminator loss : 1.3959 -  Generator loss : 0.7971. 
 Turn : 2460  Turn : 2470  Turn : 2480  Turn : 2490  Turn : 2500 
Turn : 2500 -  Epoch 1/1 -  Discriminator loss : 1.3498 -  Generator loss : 0.8131. 
 Turn : 2510  Turn : 2520  Turn : 2530  Turn : 2540  Turn : 2550 
Turn : 2550 -  Epoch 1/1 -  Discriminator loss : 1.4450 -  Generator loss : 0.8044. 
 Turn : 2560  Turn : 2570  Turn : 2580  Turn : 2590  Turn : 2600 
Turn : 2600 -  Epoch 1/1 -  Discriminator loss : 1.3488 -  Generator loss : 0.7981. 
 Turn : 2610  Turn : 2620  Turn : 2630  Turn : 2640  Turn : 2650 
Turn : 2650 -  Epoch 1/1 -  Discriminator loss : 1.3513 -  Generator loss : 0.7892. 
 Turn : 2660  Turn : 2670  Turn : 2680  Turn : 2690  Turn : 2700 
Turn : 2700 -  Epoch 1/1 -  Discriminator loss : 1.2790 -  Generator loss : 0.6914. 
 Turn : 2710  Turn : 2720  Turn : 2730  Turn : 2740  Turn : 2750 
Turn : 2750 -  Epoch 1/1 -  Discriminator loss : 1.3914 -  Generator loss : 0.8152. 
 Turn : 2760  Turn : 2770  Turn : 2780  Turn : 2790  Turn : 2800 
Turn : 2800 -  Epoch 1/1 -  Discriminator loss : 1.3495 -  Generator loss : 0.6537. 
 Turn : 2810  Turn : 2820  Turn : 2830  Turn : 2840  Turn : 2850 
Turn : 2850 -  Epoch 1/1 -  Discriminator loss : 1.3396 -  Generator loss : 0.7616. 
 Turn : 2860  Turn : 2870  Turn : 2880  Turn : 2890  Turn : 2900 
Turn : 2900 -  Epoch 1/1 -  Discriminator loss : 1.2965 -  Generator loss : 0.7294. 
 Turn : 2910  Turn : 2920  Turn : 2930  Turn : 2940  Turn : 2950 
Turn : 2950 -  Epoch 1/1 -  Discriminator loss : 1.3268 -  Generator loss : 0.7226. 
 Turn : 2960  Turn : 2970  Turn : 2980  Turn : 2990  Turn : 3000 
Turn : 3000 -  Epoch 1/1 -  Discriminator loss : 1.2749 -  Generator loss : 0.8450. 
 Turn : 3010  Turn : 3020  Turn : 3030  Turn : 3040  Turn : 3050 
Turn : 3050 -  Epoch 1/1 -  Discriminator loss : 1.2841 -  Generator loss : 0.7768. 
 Turn : 3060  Turn : 3070  Turn : 3080  Turn : 3090  Turn : 3100 
Turn : 3100 -  Epoch 1/1 -  Discriminator loss : 1.2533 -  Generator loss : 0.6841. 
 Turn : 3110  Turn : 3120  Turn : 3130  Turn : 3140  Turn : 3150 
Turn : 3150 -  Epoch 1/1 -  Discriminator loss : 1.3642 -  Generator loss : 0.8597. 
 Turn : 3160 

In [ ]:
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.3


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)


 Turn : 10  Turn : 20  Turn : 30  Turn : 40  Turn : 50 
Turn : 50 -  Epoch 1/1 -  Discriminator loss : 1.8161 -  Generator loss : 0.3014. 
 Turn : 60  Turn : 70  Turn : 80  Turn : 90  Turn : 100 
Turn : 100 -  Epoch 1/1 -  Discriminator loss : 1.4600 -  Generator loss : 1.3809. 
 Turn : 110  Turn : 120  Turn : 130  Turn : 140  Turn : 150 
Turn : 150 -  Epoch 1/1 -  Discriminator loss : 1.1896 -  Generator loss : 1.0396. 
 Turn : 160  Turn : 170  Turn : 180  Turn : 190  Turn : 200 
Turn : 200 -  Epoch 1/1 -  Discriminator loss : 1.4252 -  Generator loss : 1.0987. 
 Turn : 210  Turn : 220  Turn : 230  Turn : 240  Turn : 250 
Turn : 250 -  Epoch 1/1 -  Discriminator loss : 1.4160 -  Generator loss : 0.6985. 
 Turn : 260  Turn : 270  Turn : 280  Turn : 290  Turn : 300 
Turn : 300 -  Epoch 1/1 -  Discriminator loss : 1.4403 -  Generator loss : 0.9366. 
 Turn : 310  Turn : 320  Turn : 330  Turn : 340  Turn : 350 
Turn : 350 -  Epoch 1/1 -  Discriminator loss : 1.4462 -  Generator loss : 0.7690. 
 Turn : 360  Turn : 370  Turn : 380  Turn : 390  Turn : 400 
Turn : 400 -  Epoch 1/1 -  Discriminator loss : 1.6611 -  Generator loss : 0.6499. 
 Turn : 410  Turn : 420  Turn : 430  Turn : 440  Turn : 450 
Turn : 450 -  Epoch 1/1 -  Discriminator loss : 1.4661 -  Generator loss : 0.6522. 
 Turn : 460  Turn : 470  Turn : 480  Turn : 490  Turn : 500 
Turn : 500 -  Epoch 1/1 -  Discriminator loss : 1.5596 -  Generator loss : 0.6080. 
 Turn : 510  Turn : 520  Turn : 530  Turn : 540  Turn : 550 
Turn : 550 -  Epoch 1/1 -  Discriminator loss : 1.4529 -  Generator loss : 0.7435. 
 Turn : 560  Turn : 570  Turn : 580  Turn : 590  Turn : 600 
Turn : 600 -  Epoch 1/1 -  Discriminator loss : 1.4147 -  Generator loss : 0.7475. 
 Turn : 610  Turn : 620  Turn : 630  Turn : 640  Turn : 650 
Turn : 650 -  Epoch 1/1 -  Discriminator loss : 1.4311 -  Generator loss : 0.8079. 
 Turn : 660  Turn : 670  Turn : 680  Turn : 690  Turn : 700 
Turn : 700 -  Epoch 1/1 -  Discriminator loss : 1.4108 -  Generator loss : 0.7685. 
 Turn : 710  Turn : 720  Turn : 730  Turn : 740  Turn : 750 
Turn : 750 -  Epoch 1/1 -  Discriminator loss : 1.3968 -  Generator loss : 0.8224. 
 Turn : 760  Turn : 770  Turn : 780  Turn : 790  Turn : 800 
Turn : 800 -  Epoch 1/1 -  Discriminator loss : 1.4127 -  Generator loss : 0.7488. 
 Turn : 810  Turn : 820  Turn : 830  Turn : 840  Turn : 850 
Turn : 850 -  Epoch 1/1 -  Discriminator loss : 1.4661 -  Generator loss : 0.8316. 
 Turn : 860  Turn : 870  Turn : 880  Turn : 890  Turn : 900 
Turn : 900 -  Epoch 1/1 -  Discriminator loss : 1.4412 -  Generator loss : 0.6080. 
 Turn : 910  Turn : 920  Turn : 930  Turn : 940  Turn : 950 
Turn : 950 -  Epoch 1/1 -  Discriminator loss : 1.4526 -  Generator loss : 0.5198. 
 Turn : 960  Turn : 970  Turn : 980  Turn : 990  Turn : 1000 
Turn : 1000 -  Epoch 1/1 -  Discriminator loss : 1.4385 -  Generator loss : 0.8671. 
 Turn : 1010  Turn : 1020  Turn : 1030  Turn : 1040  Turn : 1050 
Turn : 1050 -  Epoch 1/1 -  Discriminator loss : 1.4238 -  Generator loss : 0.9317. 
 Turn : 1060  Turn : 1070  Turn : 1080  Turn : 1090  Turn : 1100 
Turn : 1100 -  Epoch 1/1 -  Discriminator loss : 1.4828 -  Generator loss : 0.6258. 
 Turn : 1110  Turn : 1120  Turn : 1130  Turn : 1140  Turn : 1150 
Turn : 1150 -  Epoch 1/1 -  Discriminator loss : 1.4390 -  Generator loss : 0.9371. 
 Turn : 1160  Turn : 1170  Turn : 1180  Turn : 1190  Turn : 1200 
Turn : 1200 -  Epoch 1/1 -  Discriminator loss : 1.4095 -  Generator loss : 0.7317. 
 Turn : 1210  Turn : 1220  Turn : 1230  Turn : 1240  Turn : 1250 
Turn : 1250 -  Epoch 1/1 -  Discriminator loss : 1.3312 -  Generator loss : 0.8575. 
 Turn : 1260 

In [ ]:


In [ ]:


In [ ]:

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.