Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before moving on to CelebA. Running the GAN on MNIST will let you see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".


In [46]:
data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)


Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change the number of examples displayed by changing show_n_images.


In [47]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')


Out[47]:
<matplotlib.image.AxesImage at 0x1e893dad4a8>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change the number of examples displayed by changing show_n_images.


In [48]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))


Out[48]:
<matplotlib.image.AxesImage at 0x1e886a7ef60>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. Both the MNIST and CelebA images will be 28x28, with pixel values scaled to the range -0.5 to 0.5. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.

The MNIST images are black and white with a single color channel, while the CelebA images have 3 color channels (RGB).
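
As a rough illustration (not the actual helper code), the cropping and scaling might look like the sketch below; the 108x108 center crop box and bilinear resize are assumptions for illustration, not values taken from helper.

import numpy as np
from PIL import Image

def preprocess_image(image_path, width=28, height=28, mode='RGB'):
    # Hypothetical sketch of the preprocessing; crop box and resample are assumed
    image = Image.open(image_path)
    if image.size != (width, height):
        # Center-crop the region most likely to contain the face
        face_width = face_height = 108
        j = (image.size[0] - face_width) // 2
        i = (image.size[1] - face_height) // 2
        image = image.crop([j, i, j + face_width, i + face_height])
        image = image.resize([width, height], Image.BILINEAR)
    # Scale pixel values from [0, 255] to [-0.5, 0.5]
    return np.array(image.convert(mode), dtype=np.float32) / 255 - 0.5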

Build the Neural Network

You'll build the components necessary to train a GAN by implementing the following functions:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check that you have the correct version of TensorFlow and access to a GPU.


In [30]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))


TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).


In [31]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    real_input = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    z_input = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    
    return real_input, z_input, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)


Tests Passed
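
For a quick sanity check of the shapes (a throwaway sketch, not part of the project), you could build the placeholders in a fresh graph:

with tf.Graph().as_default():
    input_real, input_z, lr = model_inputs(28, 28, 3, 100)
    print(input_real.shape)  # (?, 28, 28, 3)
    print(input_z.shape)     # (?, 100)
    print(lr.shape)          # <unknown>, since no shape was specified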

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).


In [80]:
def discriminator(images, reuse=False, alpha=0.2):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :param alpha: Leak parameter for the leaky ReLU activations
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    with tf.variable_scope('discriminator', reuse=reuse):
        # First convolution: 28x28xC -> 14x14x64
        out1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out1 = tf.maximum(out1 * alpha, out1)  # leaky ReLU
        
        # Second convolution: 14x14x64 -> 7x7x128
        out2 = tf.layers.conv2d(out1, 128, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out2 = tf.layers.batch_normalization(out2, training=True)
        out2 = tf.maximum(out2 * alpha, out2)
        
        # Third convolution: 7x7x128 -> 4x4x256 ('same' padding rounds 7/2 up to 4)
        out3 = tf.layers.conv2d(out2, 256, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out3 = tf.layers.batch_normalization(out3, training=True)
        out3 = tf.maximum(out3 * alpha, out3)
        
        # Flatten and map to a single real/fake logit
        flat = tf.reshape(out3, (-1, 4*4*256))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)

        return out, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)


Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.


In [92]:
def generator(z, out_channel_dim, is_train=True, alpha=0.2):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :param alpha: Leak parameter for the leaky ReLU activations
    :return: The tensor output of the generator
    """
    # Reuse variables when generating samples outside of training
    with tf.variable_scope('generator', reuse=not is_train):
        # Project and reshape: z -> 7x7x256
        out1 = tf.layers.dense(z, 7*7*256)
        out1 = tf.reshape(out1, (-1, 7, 7, 256))
        out1 = tf.layers.batch_normalization(out1, training=is_train)
        out1 = tf.maximum(out1 * alpha, out1)  # leaky ReLU
        
        # First transposed convolution: 7x7x256 -> 14x14x128
        out2 = tf.layers.conv2d_transpose(out1, 128, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out2 = tf.layers.batch_normalization(out2, training=is_train)
        out2 = tf.maximum(out2 * alpha, out2)
        
        # Second transposed convolution: 14x14x128 -> 28x28x64
        out3 = tf.layers.conv2d_transpose(out2, 64, 5, strides=2, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out3 = tf.layers.batch_normalization(out3, training=is_train)
        out3 = tf.maximum(out3 * alpha, out3)
        
        # Output layer: 28x28xout_channel_dim, squashed to [-1, 1] with tanh
        logits = tf.layers.conv2d_transpose(out3, out_channel_dim, 5, strides=1, padding='same', kernel_initializer=tf.random_normal_initializer(stddev=0.05))
        out = tf.tanh(logits)
        
        return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)


Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)

In [82]:
def model_loss(input_real, input_z, out_channel_dim, alpha=0.2, smooth=0.1):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :param alpha: Leak parameter passed through to the leaky ReLU activations
    :param smooth: One-sided label smoothing factor for the real labels
    :return: A tuple of (discriminator loss, generator loss)
    """
    g_model = generator(input_z, out_channel_dim, alpha=alpha)
    d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
    d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
    
    # Label real images as (1 - smooth) instead of 1 to keep the
    # discriminator from becoming overconfident (one-sided label smoothing)
    d_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real) * (1 - smooth)))
    d_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
    # The generator succeeds when the discriminator labels its fakes as real
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))

    d_loss = d_loss_real + d_loss_fake

    return d_loss, g_loss


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)


Tests Passed
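
As a sanity check on what these losses compute, here is a minimal numpy sketch (illustration only, with made-up logits) of the sigmoid cross-entropy that tf.nn.sigmoid_cross_entropy_with_logits applies, combined with the one-sided label smoothing used above:

import numpy as np

def sigmoid_cross_entropy(logits, labels):
    # Numerically stable sigmoid cross-entropy: max(x, 0) - x*z + log(1 + exp(-|x|))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

smooth = 0.1
logits_real = np.array([2.0, -1.0, 3.0])           # example discriminator logits on real images
labels = np.ones_like(logits_real) * (1 - smooth)  # smoothed "real" labels
print(sigmoid_cross_entropy(logits_real, labels).mean())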

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).


In [83]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    tf_vars = tf.trainable_variables()
    dis_vars = [var for var in tf_vars if var.name.startswith('discriminator')]
    gen_vars = [var for var in tf_vars if var.name.startswith('generator')]
    
    # Batch normalization registers moving-average update ops in UPDATE_OPS;
    # making them a dependency keeps those statistics current during training
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        dis_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=dis_vars)
        gen_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=gen_vars)
    
    return dis_train_opt, gen_train_opt


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)


Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.


In [84]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GANs. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show the generator's output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.


In [87]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # Build the model
    input_real, input_z, learning_rate_tf = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
    d_train_opt, g_train_opt = model_opt(d_loss, g_loss, learning_rate_tf, beta1)
    
    step = 0
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            print("Current Epoch {}...".format(epoch_i+1))
            for batch_images in get_batches(batch_size):
                step += 1
                batch_z = np.random.uniform(-1, 1, size=[batch_size, z_dim])
                
                # Rescale images from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
                batch_images *= 2
                
                # Run the optimizers, training the generator twice per discriminator update
                _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z, learning_rate_tf: learning_rate})
                _ = sess.run(g_train_opt, feed_dict={input_z: batch_z, input_real: batch_images, learning_rate_tf: learning_rate})
                _ = sess.run(g_train_opt, feed_dict={input_z: batch_z, input_real: batch_images, learning_rate_tf: learning_rate})
                
                if step % 5 == 0:
                    train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
                    train_loss_g = g_loss.eval({input_z: batch_z})
                    print("Current Epoch {}/{} \n".format(epoch_i+1, epoch_count),
                          "Discriminator Loss is : {:.4f} \n".format(train_loss_d),
                          "Generator Loss is : {:.4f}".format(train_loss_g))
                    if step % 25 == 0:
                        show_generator_output(sess, 25, input_z, data_shape[3], data_image_mode)
                    print("\n\n")

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator, or close to 0.
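
If you'd rather judge these losses from a plot than from the printed log, a sketch like the following could help; it assumes you modify train to collect each logged loss pair into the hypothetical lists d_losses and g_losses, which the implementation above does not do:

from matplotlib import pyplot

def plot_losses(d_losses, g_losses):
    # Plot discriminator and generator losses over the logged training steps
    pyplot.plot(d_losses, label='Discriminator')
    pyplot.plot(g_losses, label='Generator')
    pyplot.xlabel('Logged step')
    pyplot.ylabel('Loss')
    pyplot.legend()
    pyplot.show()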


In [90]:
batch_size = 128
z_dim = 256
learning_rate = 0.0001
beta1 = 0.5


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)


Current Epoch 1...
Current Epoch 1/2 
 Discriminator Loss is : 0.8838 
 Generator Loss is : 1.1515



Current Epoch 1/2 
 Discriminator Loss is : 0.5947 
 Generator Loss is : 1.7733



Current Epoch 1/2 
 Discriminator Loss is : 0.6234 
 Generator Loss is : 1.7531



Current Epoch 1/2 
 Discriminator Loss is : 0.5632 
 Generator Loss is : 2.0472



Current Epoch 1/2 
 Discriminator Loss is : 0.6089 
 Generator Loss is : 1.8528


Current Epoch 1/2 
 Discriminator Loss is : 0.6890 
 Generator Loss is : 1.7135



Current Epoch 1/2 
 Discriminator Loss is : 0.7984 
 Generator Loss is : 1.6617



Current Epoch 1/2 
 Discriminator Loss is : 0.7715 
 Generator Loss is : 1.5823



Current Epoch 1/2 
 Discriminator Loss is : 0.8004 
 Generator Loss is : 1.4205



Current Epoch 1/2 
 Discriminator Loss is : 0.7876 
 Generator Loss is : 1.4180


Current Epoch 1/2 
 Discriminator Loss is : 0.6305 
 Generator Loss is : 1.9816



Current Epoch 1/2 
 Discriminator Loss is : 0.8217 
 Generator Loss is : 1.3229



Current Epoch 1/2 
 Discriminator Loss is : 0.5790 
 Generator Loss is : 2.2231



Current Epoch 1/2 
 Discriminator Loss is : 0.5701 
 Generator Loss is : 2.2108



Current Epoch 1/2 
 Discriminator Loss is : 0.5825 
 Generator Loss is : 1.9723


Current Epoch 1/2 
 Discriminator Loss is : 0.5268 
 Generator Loss is : 2.4254



Current Epoch 1/2 
 Discriminator Loss is : 0.5198 
 Generator Loss is : 2.4020



Current Epoch 1/2 
 Discriminator Loss is : 0.5911 
 Generator Loss is : 2.3119



Current Epoch 1/2 
 Discriminator Loss is : 0.5507 
 Generator Loss is : 2.0705



Current Epoch 1/2 
 Discriminator Loss is : 0.5566 
 Generator Loss is : 2.3581


Current Epoch 1/2 
 Discriminator Loss is : 0.6049 
 Generator Loss is : 1.8410



Current Epoch 1/2 
 Discriminator Loss is : 0.6522 
 Generator Loss is : 2.1543



Current Epoch 1/2 
 Discriminator Loss is : 0.5808 
 Generator Loss is : 2.3821



Current Epoch 1/2 
 Discriminator Loss is : 0.6190 
 Generator Loss is : 2.4912



Current Epoch 1/2 
 Discriminator Loss is : 0.6010 
 Generator Loss is : 2.4119


Current Epoch 1/2 
 Discriminator Loss is : 0.6558 
 Generator Loss is : 1.7942



Current Epoch 1/2 
 Discriminator Loss is : 0.7104 
 Generator Loss is : 1.4166



Current Epoch 1/2 
 Discriminator Loss is : 0.7218 
 Generator Loss is : 1.3981



Current Epoch 1/2 
 Discriminator Loss is : 0.6018 
 Generator Loss is : 2.0667



Current Epoch 1/2 
 Discriminator Loss is : 0.5831 
 Generator Loss is : 2.1990


Current Epoch 1/2 
 Discriminator Loss is : 0.6090 
 Generator Loss is : 1.9189



Current Epoch 1/2 
 Discriminator Loss is : 0.6711 
 Generator Loss is : 1.7178



Current Epoch 1/2 
 Discriminator Loss is : 0.5689 
 Generator Loss is : 2.0694



Current Epoch 1/2 
 Discriminator Loss is : 0.5498 
 Generator Loss is : 2.2915



Current Epoch 1/2 
 Discriminator Loss is : 0.5913 
 Generator Loss is : 2.0090


Current Epoch 1/2 
 Discriminator Loss is : 0.6274 
 Generator Loss is : 1.6421



Current Epoch 1/2 
 Discriminator Loss is : 0.5851 
 Generator Loss is : 2.0729



Current Epoch 1/2 
 Discriminator Loss is : 0.5490 
 Generator Loss is : 2.7646



Current Epoch 1/2 
 Discriminator Loss is : 0.7308 
 Generator Loss is : 2.7695



Current Epoch 1/2 
 Discriminator Loss is : 0.5416 
 Generator Loss is : 1.9976


Current Epoch 1/2 
 Discriminator Loss is : 0.5598 
 Generator Loss is : 1.9342



Current Epoch 1/2 
 Discriminator Loss is : 0.5433 
 Generator Loss is : 2.0003



Current Epoch 1/2 
 Discriminator Loss is : 0.8141 
 Generator Loss is : 1.2325



Current Epoch 1/2 
 Discriminator Loss is : 0.5396 
 Generator Loss is : 2.0332



Current Epoch 1/2 
 Discriminator Loss is : 0.5306 
 Generator Loss is : 2.4981


Current Epoch 1/2 
 Discriminator Loss is : 0.5376 
 Generator Loss is : 2.0273



Current Epoch 1/2 
 Discriminator Loss is : 0.5217 
 Generator Loss is : 2.3296



Current Epoch 1/2 
 Discriminator Loss is : 0.5168 
 Generator Loss is : 3.0018



Current Epoch 1/2 
 Discriminator Loss is : 0.6879 
 Generator Loss is : 1.3958



Current Epoch 1/2 
 Discriminator Loss is : 0.5218 
 Generator Loss is : 2.8060


Current Epoch 1/2 
 Discriminator Loss is : 0.5842 
 Generator Loss is : 1.9054



Current Epoch 1/2 
 Discriminator Loss is : 0.5619 
 Generator Loss is : 2.4161



Current Epoch 1/2 
 Discriminator Loss is : 0.7812 
 Generator Loss is : 1.1171



Current Epoch 1/2 
 Discriminator Loss is : 0.5153 
 Generator Loss is : 2.3442



Current Epoch 1/2 
 Discriminator Loss is : 0.5493 
 Generator Loss is : 2.0686


Current Epoch 1/2 
 Discriminator Loss is : 0.5985 
 Generator Loss is : 2.1853



Current Epoch 1/2 
 Discriminator Loss is : 0.5834 
 Generator Loss is : 2.3623



Current Epoch 1/2 
 Discriminator Loss is : 0.5463 
 Generator Loss is : 2.2503



Current Epoch 1/2 
 Discriminator Loss is : 0.5660 
 Generator Loss is : 2.2172



Current Epoch 1/2 
 Discriminator Loss is : 0.5798 
 Generator Loss is : 2.5697


Current Epoch 1/2 
 Discriminator Loss is : 0.9801 
 Generator Loss is : 0.8320



Current Epoch 1/2 
 Discriminator Loss is : 0.6914 
 Generator Loss is : 1.6559



Current Epoch 1/2 
 Discriminator Loss is : 0.8742 
 Generator Loss is : 1.0245



Current Epoch 1/2 
 Discriminator Loss is : 0.6341 
 Generator Loss is : 1.8579



Current Epoch 1/2 
 Discriminator Loss is : 1.0390 
 Generator Loss is : 2.8223


Current Epoch 1/2 
 Discriminator Loss is : 0.7931 
 Generator Loss is : 1.6768



Current Epoch 1/2 
 Discriminator Loss is : 0.7792 
 Generator Loss is : 1.5181



Current Epoch 1/2 
 Discriminator Loss is : 0.8110 
 Generator Loss is : 1.7590



Current Epoch 1/2 
 Discriminator Loss is : 0.6901 
 Generator Loss is : 1.9109



Current Epoch 1/2 
 Discriminator Loss is : 0.7123 
 Generator Loss is : 1.9024


Current Epoch 1/2 
 Discriminator Loss is : 0.8992 
 Generator Loss is : 2.0859



Current Epoch 1/2 
 Discriminator Loss is : 0.6574 
 Generator Loss is : 1.8700



Current Epoch 1/2 
 Discriminator Loss is : 0.7772 
 Generator Loss is : 1.4782



Current Epoch 1/2 
 Discriminator Loss is : 0.6572 
 Generator Loss is : 2.0683



Current Epoch 1/2 
 Discriminator Loss is : 0.9810 
 Generator Loss is : 0.8907


Current Epoch 1/2 
 Discriminator Loss is : 0.7696 
 Generator Loss is : 1.3603



Current Epoch 1/2 
 Discriminator Loss is : 0.8370 
 Generator Loss is : 1.2893



Current Epoch 1/2 
 Discriminator Loss is : 0.7659 
 Generator Loss is : 1.3481



Current Epoch 1/2 
 Discriminator Loss is : 0.7852 
 Generator Loss is : 1.4321



Current Epoch 1/2 
 Discriminator Loss is : 0.6995 
 Generator Loss is : 1.9119


Current Epoch 1/2 
 Discriminator Loss is : 0.6583 
 Generator Loss is : 1.7934



Current Epoch 1/2 
 Discriminator Loss is : 0.9553 
 Generator Loss is : 2.5923



Current Epoch 1/2 
 Discriminator Loss is : 0.7112 
 Generator Loss is : 1.8507



Current Epoch 1/2 
 Discriminator Loss is : 1.3820 
 Generator Loss is : 0.5811



Current Epoch 1/2 
 Discriminator Loss is : 0.6222 
 Generator Loss is : 1.9444


Current Epoch 1/2 
 Discriminator Loss is : 0.9512 
 Generator Loss is : 2.5168



Current Epoch 1/2 
 Discriminator Loss is : 0.9208 
 Generator Loss is : 1.2926



Current Epoch 1/2 
 Discriminator Loss is : 0.6967 
 Generator Loss is : 1.4988



Current Epoch 1/2 
 Discriminator Loss is : 0.8876 
 Generator Loss is : 1.0481



Current Epoch 1/2 
 Discriminator Loss is : 0.7767 
 Generator Loss is : 2.1671


Current Epoch 1/2 
 Discriminator Loss is : 0.7763 
 Generator Loss is : 1.8164



Current Epoch 1/2 
 Discriminator Loss is : 0.6611 
 Generator Loss is : 2.2123



Current Epoch 1/2 
 Discriminator Loss is : 0.7671 
 Generator Loss is : 1.6849



Current Epoch 2...
Current Epoch 2/2 
 Discriminator Loss is : 0.8242 
 Generator Loss is : 1.7637



Current Epoch 2/2 
 Discriminator Loss is : 0.8620 
 Generator Loss is : 1.7471


Current Epoch 2/2 
 Discriminator Loss is : 0.6580 
 Generator Loss is : 2.1451



Current Epoch 2/2 
 Discriminator Loss is : 0.9286 
 Generator Loss is : 1.0091



Current Epoch 2/2 
 Discriminator Loss is : 0.7694 
 Generator Loss is : 1.8731



Current Epoch 2/2 
 Discriminator Loss is : 0.9791 
 Generator Loss is : 1.9980



Current Epoch 2/2 
 Discriminator Loss is : 0.7608 
 Generator Loss is : 1.8083


Current Epoch 2/2 
 Discriminator Loss is : 0.9740 
 Generator Loss is : 2.3271



Current Epoch 2/2 
 Discriminator Loss is : 0.7167 
 Generator Loss is : 1.6943



Current Epoch 2/2 
 Discriminator Loss is : 0.7058 
 Generator Loss is : 1.6402



Current Epoch 2/2 
 Discriminator Loss is : 0.8459 
 Generator Loss is : 1.2381



Current Epoch 2/2 
 Discriminator Loss is : 0.6523 
 Generator Loss is : 1.9193


Current Epoch 2/2 
 Discriminator Loss is : 0.6773 
 Generator Loss is : 1.8750



Current Epoch 2/2 
 Discriminator Loss is : 0.9499 
 Generator Loss is : 2.3204



Current Epoch 2/2 
 Discriminator Loss is : 0.6799 
 Generator Loss is : 1.8597



Current Epoch 2/2 
 Discriminator Loss is : 0.7057 
 Generator Loss is : 1.8527



Current Epoch 2/2 
 Discriminator Loss is : 1.0397 
 Generator Loss is : 2.7849


Current Epoch 2/2 
 Discriminator Loss is : 0.6505 
 Generator Loss is : 2.1017



Current Epoch 2/2 
 Discriminator Loss is : 0.8510 
 Generator Loss is : 1.1938



Current Epoch 2/2 
 Discriminator Loss is : 0.6586 
 Generator Loss is : 2.2802



Current Epoch 2/2 
 Discriminator Loss is : 0.6939 
 Generator Loss is : 1.4607



Current Epoch 2/2 
 Discriminator Loss is : 0.8132 
 Generator Loss is : 1.3107


Current Epoch 2/2 
 Discriminator Loss is : 0.7818 
 Generator Loss is : 1.9531



Current Epoch 2/2 
 Discriminator Loss is : 0.6900 
 Generator Loss is : 1.7846



Current Epoch 2/2 
 Discriminator Loss is : 1.0960 
 Generator Loss is : 0.8011



Current Epoch 2/2 
 Discriminator Loss is : 0.6754 
 Generator Loss is : 2.0493



Current Epoch 2/2 
 Discriminator Loss is : 0.6140 
 Generator Loss is : 2.1469


Current Epoch 2/2 
 Discriminator Loss is : 0.6397 
 Generator Loss is : 1.8683



Current Epoch 2/2 
 Discriminator Loss is : 0.6367 
 Generator Loss is : 1.8908



Current Epoch 2/2 
 Discriminator Loss is : 0.8119 
 Generator Loss is : 2.0417



Current Epoch 2/2 
 Discriminator Loss is : 0.7065 
 Generator Loss is : 1.5814



Current Epoch 2/2 
 Discriminator Loss is : 0.8087 
 Generator Loss is : 1.3156


Current Epoch 2/2 
 Discriminator Loss is : 0.8688 
 Generator Loss is : 1.2332



Current Epoch 2/2 
 Discriminator Loss is : 0.8181 
 Generator Loss is : 1.3443



Current Epoch 2/2 
 Discriminator Loss is : 0.8616 
 Generator Loss is : 1.1499



Current Epoch 2/2 
 Discriminator Loss is : 0.7736 
 Generator Loss is : 1.3444



Current Epoch 2/2 
 Discriminator Loss is : 0.8423 
 Generator Loss is : 1.2114


Current Epoch 2/2 
 Discriminator Loss is : 0.6478 
 Generator Loss is : 1.8136



Current Epoch 2/2 
 Discriminator Loss is : 1.0253 
 Generator Loss is : 0.8790



Current Epoch 2/2 
 Discriminator Loss is : 0.8667 
 Generator Loss is : 1.0814



Current Epoch 2/2 
 Discriminator Loss is : 0.7655 
 Generator Loss is : 1.9819



Current Epoch 2/2 
 Discriminator Loss is : 1.1305 
 Generator Loss is : 0.8082


Current Epoch 2/2 
 Discriminator Loss is : 0.8498 
 Generator Loss is : 1.4938



Current Epoch 2/2 
 Discriminator Loss is : 0.6971 
 Generator Loss is : 2.3341



Current Epoch 2/2 
 Discriminator Loss is : 0.6789 
 Generator Loss is : 1.7451



Current Epoch 2/2 
 Discriminator Loss is : 0.7348 
 Generator Loss is : 1.7844



Current Epoch 2/2 
 Discriminator Loss is : 1.0349 
 Generator Loss is : 1.6598


Current Epoch 2/2 
 Discriminator Loss is : 0.9652 
 Generator Loss is : 2.6253



Current Epoch 2/2 
 Discriminator Loss is : 0.7983 
 Generator Loss is : 1.5236



Current Epoch 2/2 
 Discriminator Loss is : 0.7326 
 Generator Loss is : 1.5851



Current Epoch 2/2 
 Discriminator Loss is : 0.6631 
 Generator Loss is : 2.0303



Current Epoch 2/2 
 Discriminator Loss is : 0.8494 
 Generator Loss is : 1.8338