Generative Adversarial Networks (GANs)

So far in CS231N, all the applications of neural networks that we have explored have been discriminative models that take an input and are trained to produce a labeled output. This has ranged from straightforward classification of image categories to sentence generation (which was still phrased as a classification problem, our labels were in vocabulary space and we’d learned a recurrence to capture multi-word labels). In this notebook, we will expand our repetoire, and build generative models using neural networks. Specifically, we will learn how to build models which generate novel images that resemble a set of training images.

What is a GAN?

In 2014, Goodfellow et al. presented a method for training generative models called Generative Adversarial Networks (GANs for short). In a GAN, we build two different neural networks. Our first network is a traditional classification network, called the discriminator. We will train the discriminator to take images, and classify them as being real (belonging to the training set) or fake (not present in the training set). Our other network, called the generator, will take random noise as input and transform it using a neural network to produce images. The goal of the generator is to fool the discriminator into thinking the images it produced are real.

We can think of this back and forth process of the generator ($G$) trying to fool the discriminator ($D$), and the discriminator trying to correctly classify real vs. fake as a minimax game: $$\underset{G}{\text{minimize}}\; \underset{D}{\text{maximize}}\; \mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log \left(1-D(G(z))\right)\right]$$ where $x \sim p_\text{data}$ are samples from the input data, $z \sim p(z)$ are the random noise samples, $G(z)$ are the generated images using the neural network generator $G$, and $D$ is the output of the discriminator, specifying the probability of an input being real. In Goodfellow et al., they analyze this minimax game and show how it relates to minimizing the Jensen-Shannon divergence between the training data distribution and the generated samples from $G$.

To optimize this minimax game, we will aternate between taking gradient descent steps on the objective for $G$, and gradient ascent steps on the objective for $D$:

  1. update the generator ($G$) to minimize the probability of the discriminator making the correct choice.
  2. update the discriminator ($D$) to maximize the probability of the discriminator making the correct choice.

While these updates are useful for analysis, they do not perform well in practice. Instead, we will use a different objective when we update the generator: maximize the probability of the discriminator making the incorrect choice. This small change helps to allevaiate problems with the generator gradient vanishing when the discriminator is confident. This is the standard update used in most GAN papers, and was used in the original paper from Goodfellow et al..

In this assignment, we will alternate the following updates:

  1. Update the generator ($G$) to maximize the probability of the discriminator making the incorrect choice on generated data: $$\underset{G}{\text{maximize}}\; \mathbb{E}_{z \sim p(z)}\left[\log D(G(z))\right]$$
  2. Update the discriminator ($D$), to maximize the probability of the discriminator making the correct choice on real and generated data: $$\underset{D}{\text{maximize}}\; \mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log \left(1-D(G(z))\right)\right]$$

What else is there?

Since 2014, GANs have exploded into a huge research area, with massive workshops, and hundreds of new papers. Compared to other approaches for generative models, they often produce the highest quality samples but are some of the most difficult and finicky models to train (see this github repo that contains a set of 17 hacks that are useful for getting models working). Improving the stabiilty and robustness of GAN training is an open research question, with new papers coming out every day! For a more recent tutorial on GANs, see here. There is also some even more recent exciting work that changes the objective function to Wasserstein distance and yields much more stable results across model architectures: WGAN, WGAN-GP.

GANs are not the only way to train a generative model! For other approaches to generative modeling check out the deep generative model chapter of the Deep Learning book. Another popular way of training neural networks as generative models is Variational Autoencoders (co-discovered here and here). Variational autoencoders combine neural networks with variational inference to train deep generative models. These models tend to be far more stable and easier to train but currently don't produce samples that are as pretty as GANs.

Example pictures of what you should expect (yours might look slightly different):

Setup


In [1]:
from __future__ import print_function, division
import tensorflow as tf
import numpy as np

import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# A bunch of utility functions

def show_images(images):
    images = np.reshape(images, [images.shape[0], -1])  # images reshape to (batch_size, D)
    sqrtn = int(np.ceil(np.sqrt(images.shape[0])))
    sqrtimg = int(np.ceil(np.sqrt(images.shape[1])))

    fig = plt.figure(figsize=(sqrtn, sqrtn))
    gs = gridspec.GridSpec(sqrtn, sqrtn)
    gs.update(wspace=0.05, hspace=0.05)

    for i, img in enumerate(images):
        ax = plt.subplot(gs[i])
        plt.axis('off')
        ax.set_xticklabels([])
        ax.set_yticklabels([])
        ax.set_aspect('equal')
        plt.imshow(img.reshape([sqrtimg,sqrtimg]))
    return

def preprocess_img(x):
    return 2 * x - 1.0

def deprocess_img(x):
    return (x + 1.0) / 2.0

def rel_error(x,y):
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))

def count_params():
    """Count the number of parameters in the current TensorFlow graph """
    param_count = np.sum([np.prod(x.get_shape().as_list()) for x in tf.global_variables()])
    return param_count


def get_session():
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    session = tf.Session(config=config)
    return session

answers = np.load('gan-checks-tf.npz')


/home/azarichkovyi/Projects/DataScience/CS231n/assignment2/env/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)

Dataset

GANs are notoriously finicky with hyperparameters, and also require many training epochs. In order to make this assignment approachable without a GPU, we will be working on the MNIST dataset, which is 60,000 training and 10,000 test images. Each picture contains a centered image of white digit on black background (0 through 9). This was one of the first datasets used to train convolutional neural networks and it is fairly easy -- a standard CNN model can easily exceed 99% accuracy.

To simplify our code here, we will use the TensorFlow MNIST wrapper, which downloads and loads the MNIST dataset. See the documentation for more information about the interface. The default parameters will take 5,000 of the training examples and place them into a validation dataset. The data will be saved into a folder called MNIST_data.

Heads-up: The TensorFlow MNIST wrapper returns images as vectors. That is, they're size (batch, 784). If you want to treat them as images, we have to resize them to (batch,28,28) or (batch,28,28,1). They are also type np.float32 and bounded [0,1].


In [2]:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('./cs231n/datasets/MNIST_data', one_hot=False)

# show a batch
show_images(mnist.train.next_batch(16)[0])


Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting ./cs231n/datasets/MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting ./cs231n/datasets/MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting ./cs231n/datasets/MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting ./cs231n/datasets/MNIST_data/t10k-labels-idx1-ubyte.gz

LeakyReLU

In the cell below, you should implement a LeakyReLU. See the class notes (where alpha is small number) or equation (3) in this paper. LeakyReLUs keep ReLU units from dying and are often used in GAN methods (as are maxout units, however those increase model size and therefore are not used in this notebook).

HINT: You should be able to use tf.maximum


In [5]:
def leaky_relu(x, alpha=0.01):
    """Compute the leaky ReLU activation function.
    
    Inputs:
    - x: TensorFlow Tensor with arbitrary shape
    - alpha: leak parameter for leaky ReLU
    
    Returns:
    TensorFlow Tensor with the same shape as x
    """
    
    x_leak = x*alpha
    y = tf.maximum(x, x_leak)
    
    return y

Test your leaky ReLU implementation. You should get errors < 1e-10


In [6]:
def test_leaky_relu(x, y_true):
    tf.reset_default_graph()
    with get_session() as sess:
        y_tf = leaky_relu(tf.constant(x))
        y = sess.run(y_tf)
        print('Maximum error: %g'%rel_error(y_true, y))

test_leaky_relu(answers['lrelu_x'], answers['lrelu_y'])


Maximum error: 0

Random Noise

Generate a TensorFlow Tensor containing uniform noise from -1 to 1 with shape [batch_size, dim].


In [7]:
def sample_noise(batch_size, dim):
    """Generate random uniform noise from -1 to 1.
    
    Inputs:
    - batch_size: integer giving the batch size of noise to generate
    - dim: integer giving the dimension of the the noise to generate
    
    Returns:
    TensorFlow Tensor containing uniform noise in [-1, 1] with shape [batch_size, dim]
    """
    
    noise = tf.random_uniform([batch_size, dim], minval=-1, maxval=1)
    return noise

Make sure noise is the correct shape and type:


In [8]:
def test_sample_noise():
    batch_size = 3
    dim = 4
    tf.reset_default_graph()
    with get_session() as sess:
        z = sample_noise(batch_size, dim)
        # Check z has the correct shape
        assert z.get_shape().as_list() == [batch_size, dim]
        # Make sure z is a Tensor and not a numpy array
        assert isinstance(z, tf.Tensor)
        # Check that we get different noise for different evaluations
        z1 = sess.run(z)
        z2 = sess.run(z)
        assert not np.array_equal(z1, z2)
        # Check that we get the correct range
        assert np.all(z1 >= -1.0) and np.all(z1 <= 1.0)
        print("All tests passed!")
    
test_sample_noise()


All tests passed!

Discriminator

Our first step is to build a discriminator. You should use the layers in tf.layers to build the model. All fully connected layers should include bias terms.

Architecture:

  • Fully connected layer from size 784 to 256
  • LeakyReLU with alpha 0.01
  • Fully connected layer from 256 to 256
  • LeakyReLU with alpha 0.01
  • Fully connected layer from 256 to 1

The output of the discriminator should have shape [batch_size, 1], and contain real numbers corresponding to the scores that each of the batch_size inputs is a real image.


In [9]:
def discriminator(x):
    """Compute discriminator score for a batch of input images.
    
    Inputs:
    - x: TensorFlow Tensor of flattened input images, shape [batch_size, 784]
    
    Returns:
    TensorFlow Tensor with shape [batch_size, 1], containing the score 
    for an image being real for each input image.
    """
    with tf.variable_scope("discriminator"):
        
        init = tf.contrib.layers.xavier_initializer(uniform=True)
        x = tf.layers.dense(x, 256, 
                            activation=leaky_relu, 
                            kernel_initializer=init, 
                            name='fc_1')
        
        x = tf.layers.dense(x, 256, 
                            activation=leaky_relu, 
                            kernel_initializer=init, 
                            name='fc_2')
        
        logits = tf.layers.dense(x, 1, 
                                 kernel_initializer=init, 
                                 name='logits')
        
        return logits

Test to make sure the number of parameters in the discriminator is correct:


In [10]:
def test_discriminator(true_count=267009):
    tf.reset_default_graph()
    with get_session() as sess:
        y = discriminator(tf.ones((2, 784)))
        cur_count = count_params()
        if cur_count != true_count:
            print('Incorrect number of parameters in discriminator. {0} instead of {1}. Check your achitecture.'.format(cur_count,true_count))
        else:
            print('Correct number of parameters in discriminator.')
        
test_discriminator()


Correct number of parameters in discriminator.

Generator

Now to build a generator. You should use the layers in tf.layers to construct the model. All fully connected layers should include bias terms.

Architecture:

  • Fully connected layer from tf.shape(z)[1] (the number of noise dimensions) to 1024
  • ReLU
  • Fully connected layer from 1024 to 1024
  • ReLU
  • Fully connected layer from 1024 to 784
  • TanH (To restrict the output to be [-1,1])

In [11]:
def generator(z):
    """Generate images from a random noise vector.
    
    Inputs:
    - z: TensorFlow Tensor of random noise with shape [batch_size, noise_dim]
    
    Returns:
    TensorFlow Tensor of generated images, with shape [batch_size, 784].
    """
    with tf.variable_scope("generator"):
        init = tf.contrib.layers.xavier_initializer(uniform=True)
        
        x = tf.layers.dense(z, 1024, 
                            activation=tf.nn.relu, 
                            kernel_initializer=init, 
                            name='fc_1')
        
        x = tf.layers.dense(x, 1024, 
                            activation=tf.nn.relu, 
                            kernel_initializer=init, 
                            name='fc_2')
        
        img = tf.layers.dense(x, 784, 
                              activation=tf.tanh,
                              kernel_initializer=init, 
                              name='image')
        
        return img

Test to make sure the number of parameters in the generator is correct:


In [12]:
def test_generator(true_count=1858320):
    tf.reset_default_graph()
    with get_session() as sess:
        y = generator(tf.ones((1, 4)))
        cur_count = count_params()
        if cur_count != true_count:
            print('Incorrect number of parameters in generator. {0} instead of {1}. Check your achitecture.'.format(cur_count,true_count))
        else:
            print('Correct number of parameters in generator.')
        
test_generator()


Correct number of parameters in generator.

GAN Loss

Compute the generator and discriminator loss. The generator loss is: $$\ell_G = -\mathbb{E}_{z \sim p(z)}\left[\log D(G(z))\right]$$ and the discriminator loss is: $$ \ell_D = -\mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] - \mathbb{E}_{z \sim p(z)}\left[\log \left(1-D(G(z))\right)\right]$$ Note that these are negated from the equations presented earlier as we will be minimizing these losses.

HINTS: Use tf.ones_like and tf.zeros_like to generate labels for your discriminator. Use sigmoid_cross_entropy loss to help compute your loss function. Instead of computing the expectation, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing.


In [13]:
def gan_loss(logits_real, logits_fake):
    """Compute the GAN loss.
    
    Inputs:
    - logits_real: Tensor, shape [batch_size, 1], output of discriminator
        Log probability that the image is real for each real image
    - logits_fake: Tensor, shape[batch_size, 1], output of discriminator
        Log probability that the image is real for each fake image
    
    Returns:
    - D_loss: discriminator loss scalar
    - G_loss: generator loss scalar
    """
    
    real_images_labels = tf.ones_like(logits_real)
    D_real_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=real_images_labels, 
                                                          logits=logits_real)
    D_real_loss = tf.reduce_mean(D_real_loss)
    
    fake_images_labels = tf.zeros_like(logits_fake)
    D_fake_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=fake_images_labels, 
                                                          logits=logits_fake)
    D_fake_loss = tf.reduce_mean(D_fake_loss)
    
    D_loss = D_real_loss + D_fake_loss
    
    generated_images_labels = tf.ones_like(logits_fake)
    G_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=generated_images_labels, 
                                                     logits=logits_fake)
    G_loss = tf.reduce_mean(G_loss)
    
    return D_loss, G_loss

Test your GAN loss. Make sure both the generator and discriminator loss are correct. You should see errors less than 1e-5.


In [14]:
def test_gan_loss(logits_real, logits_fake, d_loss_true, g_loss_true):
    tf.reset_default_graph()
    with get_session() as sess:
        d_loss, g_loss = sess.run(gan_loss(tf.constant(logits_real), tf.constant(logits_fake)))
    print("Maximum error in d_loss: %g"%rel_error(d_loss_true, d_loss))
    print("Maximum error in g_loss: %g"%rel_error(g_loss_true, g_loss))

test_gan_loss(answers['logits_real'], answers['logits_fake'],
              answers['d_loss_true'], answers['g_loss_true'])


Maximum error in d_loss: 0
Maximum error in g_loss: 0

Optimizing our loss

Make an AdamOptimizer with a 1e-3 learning rate, beta1=0.5 to mininize G_loss and D_loss separately. The trick of decreasing beta was shown to be effective in helping GANs converge in the Improved Techniques for Training GANs paper. In fact, with our current hyperparameters, if you set beta1 to the Tensorflow default of 0.9, there's a good chance your discriminator loss will go to zero and the generator will fail to learn entirely. In fact, this is a common failure mode in GANs; if your D(x) learns to be too fast (e.g. loss goes near zero), your G(z) is never able to learn. Often D(x) is trained with SGD with Momentum or RMSProp instead of Adam, but here we'll use Adam for both D(x) and G(z).


In [15]:
# TODO: create an AdamOptimizer for D_solver and G_solver
def get_solvers(learning_rate=1e-3, beta1=0.5):
    """Create solvers for GAN training.
    
    Inputs:
    - learning_rate: learning rate to use for both solvers
    - beta1: beta1 parameter for both solvers (first moment decay)
    
    Returns:
    - D_solver: instance of tf.train.AdamOptimizer with correct learning_rate and beta1
    - G_solver: instance of tf.train.AdamOptimizer with correct learning_rate and beta1
    """
    D_solver = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1)
    G_solver = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1)
    return D_solver, G_solver

Putting it all together

Now just a bit of Lego Construction.. Read this section over carefully to understand how we'll be composing the generator and discriminator


In [16]:
tf.reset_default_graph()

# number of images for each batch
batch_size = 128
# our noise dimension
noise_dim = 96

# placeholder for images from the training dataset
x = tf.placeholder(tf.float32, [None, 784])
# random noise fed into our generator
z = sample_noise(batch_size, noise_dim)
# generated images
G_sample = generator(z)

with tf.variable_scope("") as scope:
    #scale images to be -1 to 1
    logits_real = discriminator(preprocess_img(x))
    # Re-use discriminator weights on new inputs
    scope.reuse_variables()
    logits_fake = discriminator(G_sample)

# Get the list of variables for the discriminator and generator
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'discriminator')
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'generator') 

# get our solver
D_solver, G_solver = get_solvers()

# get our loss
D_loss, G_loss = gan_loss(logits_real, logits_fake)

# setup training steps
D_train_step = D_solver.minimize(D_loss, var_list=D_vars)
G_train_step = G_solver.minimize(G_loss, var_list=G_vars)
D_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS, 'discriminator')
G_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS, 'generator')

Training a GAN!

Well that wasn't so hard, was it? In the iterations in the low 100s you should see black backgrounds, fuzzy shapes as you approach iteration 1000, and decent shapes, about half of which will be sharp and clearly recognizable as we pass 3000. In our case, we'll simply train D(x) and G(z) with one batch each every iteration. However, papers often experiment with different schedules of training D(x) and G(z), sometimes doing one for more steps than the other, or even training each one until the loss gets "good enough" and then switching to training the other.


In [17]:
# a giant helper function
def run_a_gan(sess, G_train_step, G_loss, D_train_step, D_loss, G_extra_step, D_extra_step,\
              show_every=250, print_every=50, batch_size=128, num_epoch=10):
    """Train a GAN for a certain number of epochs.
    
    Inputs:
    - sess: A tf.Session that we want to use to run our data
    - G_train_step: A training step for the Generator
    - G_loss: Generator loss
    - D_train_step: A training step for the Generator
    - D_loss: Discriminator loss
    - G_extra_step: A collection of tf.GraphKeys.UPDATE_OPS for generator
    - D_extra_step: A collection of tf.GraphKeys.UPDATE_OPS for discriminator
    Returns:
        Nothing
    """
    # compute the number of iterations we need
    max_iter = int(mnist.train.num_examples*num_epoch/batch_size)
    for it in range(max_iter):
        # every show often, show a sample result
        if it % show_every == 0:
            samples = sess.run(G_sample)
            fig = show_images(samples[:16])
            plt.show()
            print()
        # run a batch of data through the network
        minibatch,minbatch_y = mnist.train.next_batch(batch_size)
        _, D_loss_curr = sess.run([D_train_step, D_loss], feed_dict={x: minibatch})
        _, G_loss_curr = sess.run([G_train_step, G_loss])

        # print loss every so often.
        # We want to make sure D_loss doesn't go to 0
        if it % print_every == 0:
            print('Iter: {}, D: {:.4}, G:{:.4}'.format(it,D_loss_curr,G_loss_curr))
    print('Final images')
    samples = sess.run(G_sample)

    fig = show_images(samples[:16])
    plt.show()

Train your GAN! This should take about 10 minutes on a CPU, or less than a minute on GPU.


In [18]:
with get_session() as sess:
    sess.run(tf.global_variables_initializer())
    run_a_gan(sess,G_train_step,G_loss,D_train_step,D_loss,G_extra_step,D_extra_step)


Iter: 0, D: 2.011, G:0.8034
Iter: 50, D: 0.4772, G:1.443
Iter: 100, D: 1.056, G:2.271
Iter: 150, D: 1.701, G:1.798
Iter: 200, D: 1.434, G:1.28
Iter: 250, D: 0.9132, G:1.578
Iter: 300, D: 1.089, G:1.457
Iter: 350, D: 0.7775, G:1.395
Iter: 400, D: 1.22, G:1.354
Iter: 450, D: 0.9734, G:2.297
Iter: 500, D: 1.348, G:1.144
Iter: 550, D: 1.783, G:1.913
Iter: 600, D: 1.372, G:1.095
Iter: 650, D: 1.199, G:1.493
Iter: 700, D: 1.116, G:1.408
Iter: 750, D: 1.034, G:1.27
Iter: 800, D: 1.044, G:1.155
Iter: 850, D: 2.526, G:1.001
Iter: 900, D: 1.239, G:1.027
Iter: 950, D: 1.386, G:2.797
Iter: 1000, D: 1.248, G:1.716
Iter: 1050, D: 1.226, G:0.9788
Iter: 1100, D: 1.118, G:1.05
Iter: 1150, D: 1.182, G:1.129
Iter: 1200, D: 1.177, G:1.005
Iter: 1250, D: 1.4, G:1.069
Iter: 1300, D: 1.126, G:0.9809
Iter: 1350, D: 1.213, G:0.9757
Iter: 1400, D: 1.177, G:0.8938
Iter: 1450, D: 1.12, G:1.096
Iter: 1500, D: 1.255, G:1.015
Iter: 1550, D: 1.148, G:1.305
Iter: 1600, D: 1.317, G:0.9868
Iter: 1650, D: 1.135, G:0.9848
Iter: 1700, D: 1.363, G:0.7999
Iter: 1750, D: 1.212, G:0.9935
Iter: 1800, D: 1.32, G:0.996
Iter: 1850, D: 1.301, G:0.8933
Iter: 1900, D: 1.275, G:1.018
Iter: 1950, D: 1.272, G:0.8759
Iter: 2000, D: 1.19, G:1.019
Iter: 2050, D: 1.269, G:0.8983
Iter: 2100, D: 1.3, G:0.8014
Iter: 2150, D: 1.336, G:0.8333
Iter: 2200, D: 1.28, G:0.8365
Iter: 2250, D: 1.326, G:0.9234
Iter: 2300, D: 1.291, G:0.9975
Iter: 2350, D: 1.319, G:0.8488
Iter: 2400, D: 1.209, G:0.9045
Iter: 2450, D: 1.282, G:0.8477
Iter: 2500, D: 1.418, G:0.8955
Iter: 2550, D: 1.35, G:0.7919
Iter: 2600, D: 1.239, G:0.7445
Iter: 2650, D: 1.226, G:1.003
Iter: 2700, D: 1.278, G:0.8465
Iter: 2750, D: 1.244, G:0.8429
Iter: 2800, D: 1.315, G:0.8931
Iter: 2850, D: 1.327, G:0.8637
Iter: 2900, D: 1.307, G:0.8907
Iter: 2950, D: 1.241, G:0.9056
Iter: 3000, D: 1.303, G:0.8475
Iter: 3050, D: 1.323, G:0.7849
Iter: 3100, D: 1.349, G:0.915
Iter: 3150, D: 1.301, G:0.8949
Iter: 3200, D: 1.319, G:0.8001
Iter: 3250, D: 1.395, G:0.8188
Iter: 3300, D: 1.27, G:0.9754
Iter: 3350, D: 1.288, G:0.9788
Iter: 3400, D: 1.225, G:0.8262
Iter: 3450, D: 1.36, G:0.8834
Iter: 3500, D: 1.247, G:0.8504
Iter: 3550, D: 1.255, G:0.7719
Iter: 3600, D: 1.349, G:0.9737
Iter: 3650, D: 1.326, G:0.8593
Iter: 3700, D: 1.243, G:0.8198
Iter: 3750, D: 1.408, G:0.8123
Iter: 3800, D: 1.329, G:0.7735
Iter: 3850, D: 1.289, G:0.8935
Iter: 3900, D: 1.354, G:0.8257
Iter: 3950, D: 1.315, G:0.8143
Iter: 4000, D: 1.341, G:0.8631
Iter: 4050, D: 1.325, G:0.8642
Iter: 4100, D: 1.279, G:0.8397
Iter: 4150, D: 1.345, G:0.787
Iter: 4200, D: 1.286, G:0.8042
Iter: 4250, D: 1.274, G:0.8901
Final images

Least Squares GAN

We'll now look at Least Squares GAN, a newer, more stable alternative to the original GAN loss function. For this part, all we have to do is change the loss function and retrain the model. We'll implement equation (9) in the paper, with the generator loss: $$\ell_G = \frac{1}{2}\mathbb{E}_{z \sim p(z)}\left[\left(D(G(z))-1\right)^2\right]$$ and the discriminator loss: $$ \ell_D = \frac{1}{2}\mathbb{E}_{x \sim p_\text{data}}\left[\left(D(x)-1\right)^2\right] + \frac{1}{2}\mathbb{E}_{z \sim p(z)}\left[ \left(D(G(z))\right)^2\right]$$

HINTS: Instead of computing the expectation, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing. When plugging in for $D(x)$ and $D(G(z))$ use the direct output from the discriminator (score_real and score_fake).


In [19]:
def lsgan_loss(score_real, score_fake):
    """Compute the Least Squares GAN loss.
    
    Inputs:
    - score_real: Tensor, shape [batch_size, 1], output of discriminator
        score for each real image
    - score_fake: Tensor, shape[batch_size, 1], output of discriminator
        score for each fake image    
          
    Returns:
    - D_loss: discriminator loss scalar
    - G_loss: generator loss scalar
    """
    # TODO: compute D_loss and G_loss
    D_loss = tf.reduce_mean((score_real - 1) ** 2) + tf.reduce_mean(score_fake ** 2)
    D_loss *= 0.5
    
    G_loss = 0.5 * tf.reduce_mean((score_fake - 1) ** 2)
    return D_loss, G_loss

Test your LSGAN loss. You should see errors less than 1e-7.


In [20]:
def test_lsgan_loss(score_real, score_fake, d_loss_true, g_loss_true):
    with get_session() as sess:
        d_loss, g_loss = sess.run(
            lsgan_loss(tf.constant(score_real), tf.constant(score_fake)))
    print("Maximum error in d_loss: %g"%rel_error(d_loss_true, d_loss))
    print("Maximum error in g_loss: %g"%rel_error(g_loss_true, g_loss))

test_lsgan_loss(answers['logits_real'], answers['logits_fake'],
                answers['d_loss_lsgan_true'], answers['g_loss_lsgan_true'])


Maximum error in d_loss: 0
Maximum error in g_loss: 0

Create new training steps so we instead minimize the LSGAN loss:


In [21]:
D_loss, G_loss = lsgan_loss(logits_real, logits_fake)
D_train_step = D_solver.minimize(D_loss, var_list=D_vars)
G_train_step = G_solver.minimize(G_loss, var_list=G_vars)

In [22]:
with get_session() as sess:
    sess.run(tf.global_variables_initializer())
    run_a_gan(sess, G_train_step, G_loss, D_train_step, D_loss, G_extra_step, D_extra_step)


Iter: 0, D: 3.032, G:0.4413
Iter: 50, D: 0.1614, G:0.2621
Iter: 100, D: 0.01269, G:0.6705
Iter: 150, D: 0.1215, G:0.5179
Iter: 200, D: 0.2402, G:0.4406
Iter: 250, D: 0.1628, G:0.1469
Iter: 300, D: 0.1447, G:0.4655
Iter: 350, D: 0.106, G:0.4157
Iter: 400, D: 0.2017, G:0.6249
Iter: 450, D: 0.193, G:0.8784
Iter: 500, D: 0.09794, G:0.5992
Iter: 550, D: 0.1171, G:0.675
Iter: 600, D: 0.1279, G:0.4245
Iter: 650, D: 0.1139, G:0.1856
Iter: 700, D: 0.06292, G:0.4146
Iter: 750, D: 0.1353, G:0.3091
Iter: 800, D: 0.104, G:0.4282
Iter: 850, D: 2.715, G:0.183
Iter: 900, D: 0.1987, G:0.7306
Iter: 950, D: 0.1701, G:0.3703
Iter: 1000, D: 0.1359, G:0.4192
Iter: 1050, D: 0.2033, G:0.05397
Iter: 1100, D: 0.1726, G:0.3468
Iter: 1150, D: 0.2138, G:0.2172
Iter: 1200, D: 0.1314, G:0.2506
Iter: 1250, D: 0.1821, G:0.281
Iter: 1300, D: 0.1954, G:0.4345
Iter: 1350, D: 0.3019, G:0.02979
Iter: 1400, D: 0.1706, G:0.3175
Iter: 1450, D: 0.1968, G:0.2605
Iter: 1500, D: 0.1996, G:0.241
Iter: 1550, D: 0.2047, G:0.2369
Iter: 1600, D: 0.2311, G:0.206
Iter: 1650, D: 0.224, G:0.1996
Iter: 1700, D: 0.1814, G:0.2019
Iter: 1750, D: 0.2042, G:0.2102
Iter: 1800, D: 0.1829, G:0.2419
Iter: 1850, D: 0.2066, G:0.2167
Iter: 1900, D: 0.2109, G:0.2236
Iter: 1950, D: 0.2371, G:0.2549
Iter: 2000, D: 0.2155, G:0.2237
Iter: 2050, D: 0.1952, G:0.2084
Iter: 2100, D: 0.2136, G:0.1896
Iter: 2150, D: 0.2022, G:0.2645
Iter: 2200, D: 0.2064, G:0.1829
Iter: 2250, D: 0.1963, G:0.2297
Iter: 2300, D: 0.1936, G:0.187
Iter: 2350, D: 0.1913, G:0.2601
Iter: 2400, D: 0.2153, G:0.1504
Iter: 2450, D: 0.1944, G:0.2208
Iter: 2500, D: 0.1983, G:0.2054
Iter: 2550, D: 0.1943, G:0.1977
Iter: 2600, D: 0.2066, G:0.1808
Iter: 2650, D: 0.2103, G:0.2027
Iter: 2700, D: 0.2217, G:0.1707
Iter: 2750, D: 0.2155, G:0.1916
Iter: 2800, D: 0.2312, G:0.1831
Iter: 2850, D: 0.216, G:0.1665
Iter: 2900, D: 0.2183, G:0.2083
Iter: 2950, D: 0.2209, G:0.1807
Iter: 3000, D: 0.2161, G:0.2065
Iter: 3050, D: 0.204, G:0.1822
Iter: 3100, D: 0.1956, G:0.1979
Iter: 3150, D: 0.2267, G:0.2092
Iter: 3200, D: 0.2373, G:0.1537
Iter: 3250, D: 0.2134, G:0.1591
Iter: 3300, D: 0.2447, G:0.177
Iter: 3350, D: 0.2312, G:0.1872
Iter: 3400, D: 0.2224, G:0.1889
Iter: 3450, D: 0.211, G:0.1651
Iter: 3500, D: 0.2252, G:0.1589
Iter: 3550, D: 0.2262, G:0.161
Iter: 3600, D: 0.2275, G:0.1563
Iter: 3650, D: 0.2251, G:0.2014
Iter: 3700, D: 0.2249, G:0.1966
Iter: 3750, D: 0.2362, G:0.1891
Iter: 3800, D: 0.2394, G:0.163
Iter: 3850, D: 0.2264, G:0.1399
Iter: 3900, D: 0.2402, G:0.1945
Iter: 3950, D: 0.2438, G:0.1774
Iter: 4000, D: 0.2324, G:0.165
Iter: 4050, D: 0.2342, G:0.1526
Iter: 4100, D: 0.2365, G:0.1858
Iter: 4150, D: 0.2103, G:0.1666
Iter: 4200, D: 0.2375, G:0.1409
Iter: 4250, D: 0.2177, G:0.1799
Final images

INLINE QUESTION 1:

Describe how the visual quality of the samples changes over the course of training. Do you notice anything about the distribution of the samples? How do the results change across different training runs?

In the beginning, the network outputs similar images for different noise inputs. After further training, generated images starts to look like training images and have similar distribution.

Deep Convolutional GANs

In the first part of the notebook, we implemented an almost direct copy of the original GAN network from Ian Goodfellow. However, this network architecture allows no real spatial reasoning. It is unable to reason about things like "sharp edges" in general because it lacks any convolutional layers. Thus, in this section, we will implement some of the ideas from DCGAN, where we use convolutional networks as our discriminators and generators.

Discriminator

We will use a discriminator inspired by the TensorFlow MNIST classification tutorial, which is able to get above 99% accuracy on the MNIST dataset fairly quickly. Be sure to check the dimensions of x and reshape when needed, fully connected blocks expect [N,D] Tensors while conv2d blocks expect [N,H,W,C] Tensors.

Architecture:

  • 32 Filters, 5x5, Stride 1, Leaky ReLU(alpha=0.01)
  • Max Pool 2x2, Stride 2
  • 64 Filters, 5x5, Stride 1, Leaky ReLU(alpha=0.01)
  • Max Pool 2x2, Stride 2
  • Flatten
  • Fully Connected size 4 x 4 x 64, Leaky ReLU(alpha=0.01)
  • Fully Connected size 1

In [24]:
def discriminator(x):
    """Compute discriminator score for a batch of input images.
    
    Inputs:
    - x: TensorFlow Tensor of flattened input images, shape [batch_size, 784]
    
    Returns:
    TensorFlow Tensor with shape [batch_size, 1], containing the score 
    for an image being real for each input image.
    """
    with tf.variable_scope("discriminator"):
        
        init = tf.contrib.layers.xavier_initializer(uniform=True)
        
        x = tf.reshape(x, [-1, 28, 28, 1])
        x = tf.layers.conv2d(x, 32, 
                             kernel_size=5, 
                             activation=leaky_relu, 
                             padding='VALID',
                             kernel_initializer=init, 
                             name='conv_1')
        
        x = tf.layers.max_pooling2d(x, 
                                    pool_size=2, 
                                    strides=2, 
                                    padding='SAME', 
                                    name='maxpool_1')
        
        x = tf.layers.conv2d(x, 64, 
                             kernel_size=5, 
                             activation=leaky_relu, 
                             padding='VALID', 
                             kernel_initializer=init, 
                             name='conv_2')
        
        x = tf.layers.max_pooling2d(x, 
                                    pool_size=2, 
                                    strides=2, 
                                    padding='SAME', 
                                    name='max_pool_2')
        
        x = tf.reshape(x, [-1, 1024])
        x = tf.layers.dense(x, 1024, activation=leaky_relu, 
                            kernel_initializer=init, name='fc_1')
        logits = tf.layers.dense(x, 1, kernel_initializer=init, name='logits')
        
        return logits
test_discriminator(1102721)


Correct number of parameters in discriminator.

Generator

For the generator, we will copy the architecture exactly from the InfoGAN paper. See Appendix C.1 MNIST. See the documentation for tf.nn.conv2d_transpose. We are always "training" in GAN mode.

Architecture:

  • Fully connected of size 1024, ReLU
  • BatchNorm
  • Fully connected of size 7 x 7 x 128, ReLU
  • BatchNorm
  • Resize into Image Tensor
  • 64 conv2d^T (transpose) filters of 4x4, stride 2, ReLU
  • BatchNorm
  • 1 conv2d^T (transpose) filter of 4x4, stride 2, TanH

In [25]:
def generator(z):
    """Generate images from a random noise vector.
    
    Inputs:
    - z: TensorFlow Tensor of random noise with shape [batch_size, noise_dim]
    
    Returns:
    TensorFlow Tensor of generated images, with shape [batch_size, 784].
    """
    with tf.variable_scope("generator"):
        init = tf.contrib.layers.xavier_initializer(uniform=True)
        
        x = tf.layers.dense(z, 1024, 
                            activation=tf.nn.relu, 
                            kernel_initializer=init, 
                            name='fc_1')
        x = tf.layers.batch_normalization(x, name='batchnorm_1')
        
        x = tf.layers.dense(x, 6272, 
                            activation=tf.nn.relu, 
                            kernel_initializer=init,
                            name='fc_2')
        x = tf.layers.batch_normalization(x, name='batchnorm_2')
        
        x = tf.reshape(x, [-1, 7, 7, 128])
        x = tf.layers.conv2d_transpose(x, 64, 
                                       kernel_size=4, 
                                       strides=2, 
                                       padding='SAME', 
                                       activation=tf.nn.relu,
                                       kernel_initializer=init, 
                                       name='deconv_1')
        
        x = tf.layers.batch_normalization(x, name='batchnorm_3')
        
        x = tf.layers.conv2d_transpose(x, 1, 
                                       kernel_size=4, 
                                       strides=2, 
                                       padding='SAME', 
                                       kernel_initializer=init, 
                                       name='deconv_2')
        x = tf.tanh(x)
        img = tf.reshape(x, [-1, 784])
        
        return img
    

test_generator(6595521)


Correct number of parameters in generator.

We have to recreate our network since we've changed our functions.


In [28]:
tf.reset_default_graph()

batch_size = 128
# our noise dimension
noise_dim = 96

# placeholders for images from the training dataset
x = tf.placeholder(tf.float32, [None, 784])
z = sample_noise(batch_size, noise_dim)
# generated images
G_sample = generator(z)

with tf.variable_scope("") as scope:
    #scale images to be -1 to 1
    logits_real = discriminator(preprocess_img(x))
    # Re-use discriminator weights on new inputs
    scope.reuse_variables()
    logits_fake = discriminator(G_sample)

# Get the list of variables for the discriminator and generator
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'discriminator')
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'generator') 

D_solver,G_solver = get_solvers()
D_loss, G_loss = lsgan_loss(logits_real, logits_fake)
D_train_step = D_solver.minimize(D_loss, var_list=D_vars)
G_train_step = G_solver.minimize(G_loss, var_list=G_vars)
D_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS,'discriminator')
G_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS,'generator')

Train and evaluate a DCGAN

This is the one part of A3 that significantly benefits from using a GPU. It takes 3 minutes on a GPU for the requested five epochs. Or about 50 minutes on a dual core laptop on CPU (feel free to use 3 epochs if you do it on CPU).


In [29]:
with get_session() as sess:
    sess.run(tf.global_variables_initializer())
    run_a_gan(sess,G_train_step,G_loss,D_train_step,D_loss,G_extra_step,D_extra_step,num_epoch=3)


Iter: 0, D: 0.6558, G:0.4272
Iter: 50, D: 0.09287, G:0.2879
Iter: 100, D: 0.06088, G:0.4955
Iter: 150, D: 0.03743, G:0.4561
Iter: 200, D: 0.1027, G:0.3631
Iter: 250, D: 0.03771, G:0.5134
Iter: 300, D: 0.04762, G:0.5286
Iter: 350, D: 0.04518, G:0.4717
Iter: 400, D: 0.08176, G:0.3259
Iter: 450, D: 0.0531, G:0.5098
Iter: 500, D: 0.06243, G:0.4064
Iter: 550, D: 0.108, G:0.3008
Iter: 600, D: 0.0993, G:0.3627
Iter: 650, D: 0.1208, G:0.3205
Iter: 700, D: 0.119, G:0.2927
Iter: 750, D: 0.1225, G:0.2459
Iter: 800, D: 0.1534, G:0.243
Iter: 850, D: 0.1398, G:0.298
Iter: 900, D: 0.1449, G:0.2501
Iter: 950, D: 0.1246, G:0.3432
Iter: 1000, D: 0.1287, G:0.3099
Iter: 1050, D: 0.176, G:0.3012
Iter: 1100, D: 0.1761, G:0.3628
Iter: 1150, D: 0.159, G:0.317
Iter: 1200, D: 0.1994, G:0.2335
Iter: 1250, D: 0.1651, G:0.2545
Final images

INLINE QUESTION 2:

What differences do you see between the DCGAN results and the original GAN results?

The images generated by DCGAN are less noisy. The shapes of the digits are much clearer and smoother, but training DCGAN take much more time than an original GAN.


Extra Credit

Be sure you don't destroy your results above, but feel free to copy+paste code to get results below

  • For a small amount of extra credit, you can implement additional new GAN loss functions below, provided they converge. See AFI, BiGAN, Softmax GAN, Conditional GAN, InfoGAN, etc. They should converge to get credit.
  • Likewise for an improved architecture or using a convolutional GAN (or even implement a VAE)
  • For a bigger chunk of extra credit, load the CIFAR10 data (see last assignment) and train a compelling generative model on CIFAR-10
  • Demonstrate the value of GANs in building semi-supervised models. In a semi-supervised example, only some fraction of the input data has labels; we can supervise this in MNIST by only training on a few dozen or hundred labeled examples. This was first described in Improved Techniques for Training GANs.
  • Something new/cool.

Describe what you did here

WGAN-GP (Small Extra Credit)

Please only attempt after you have completed everything above.

We'll now look at Improved Wasserstein GAN as a newer, more stable alernative to the original GAN loss function. For this part, all we have to do is change the loss function and retrain the model. We'll implement Algorithm 1 in the paper.

You'll also need to use a discriminator and corresponding generator without max-pooling. So we cannot use the one we currently have from DCGAN. Pair the DCGAN Generator (from InfoGAN) with the discriminator from InfoGAN Appendix C.1 MNIST (We don't use Q, simply implement the network up to D). You're also welcome to define a new generator and discriminator in this notebook, in case you want to use the fully-connected pair of D(x) and G(z) you used at the top of this notebook.

Architecture:

  • 64 Filters of 4x4, stride 2, LeakyReLU
  • 128 Filters of 4x4, stride 2, LeakyReLU
  • BatchNorm
  • Flatten
  • Fully connected 1024, LeakyReLU
  • Fully connected size 1

In [30]:
def discriminator(x):
    with tf.variable_scope('discriminator'):
        init = tf.contrib.layers.xavier_initializer()
        
        x = tf.reshape(x, [-1, 28, 28, 1])
        x = tf.layers.conv2d(x, 64, 
                             kernel_size=4, 
                             activation=leaky_relu, 
                             strides=2, 
                             padding='VALID',
                             kernel_initializer=init, 
                             name='deconv_0')
        
        x = tf.layers.conv2d(x, 128, 
                             kernel_size=4, 
                             activation=leaky_relu, 
                             strides=2, 
                             padding='VALID',
                             kernel_initializer=init, 
                             name='deconv_1')
        
        x = tf.layers.batch_normalization(x, name='batchnorm_0')
        
        x = tf.reshape(x, [-1, 3200])
        x = tf.layers.dense(x, 1024, 
                            activation=leaky_relu, 
                            kernel_initializer=init,
                            name='fc_0')
        
        logits = tf.layers.dense(x, 1, kernel_initializer=init, name='logits')
        return logits
    

test_discriminator(3411649)


Correct number of parameters in discriminator.

In [31]:
tf.reset_default_graph()

batch_size = 128
# our noise dimension
noise_dim = 96

# placeholders for images from the training dataset
x = tf.placeholder(tf.float32, [None, 784])
z = sample_noise(batch_size, noise_dim)
# generated images
G_sample = generator(z)

with tf.variable_scope("") as scope:
    #scale images to be -1 to 1
    logits_real = discriminator(preprocess_img(x))
    # Re-use discriminator weights on new inputs
    scope.reuse_variables()
    logits_fake = discriminator(G_sample)

# Get the list of variables for the discriminator and generator
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'discriminator')
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'generator')

D_solver, G_solver = get_solvers()

In [33]:
def wgangp_loss(logits_real, logits_fake, batch_size, x, G_sample):
    """Compute the WGAN-GP loss.
    
    Inputs:
    - logits_real: Tensor, shape [batch_size, 1], output of discriminator
        Log probability that the image is real for each real image
    - logits_fake: Tensor, shape[batch_size, 1], output of discriminator
        Log probability that the image is real for each fake image
    - batch_size: The number of examples in this batch
    - x: the input (real) images for this batch
    - G_sample: the generated (fake) images for this batch
    
    Returns:
    - D_loss: discriminator loss scalar
    - G_loss: generator loss scalar
    """
    
    D_loss = tf.reduce_mean(logits_fake - logits_real)
    G_loss = -tf.reduce_mean(logits_fake)

    # lambda from the paper
    lam = 10
    
    # random sample of batch_size (tf.random_uniform)
    eps = tf.random_uniform([batch_size,1], minval=0.0, maxval=1.0)
    x_hat = eps * x + (1 - eps) * G_sample

    # Gradients of Gradients is kind of tricky!
    with tf.variable_scope('',reuse=True) as scope:
        grad_D_x_hat = tf.gradients(discriminator(x_hat), x_hat)

    grad_norm = tf.norm(grad_D_x_hat[0], axis=1, ord='euclidean')
    grad_pen = tf.reduce_mean(lam * tf.square(grad_norm - 1))
    
    D_loss += grad_pen

    return D_loss, G_loss

D_loss, G_loss = wgangp_loss(logits_real, logits_fake, 128, x, G_sample)
D_train_step = D_solver.minimize(D_loss, var_list=D_vars)
G_train_step = G_solver.minimize(G_loss, var_list=G_vars)
D_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS,'discriminator')
G_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS,'generator')

In [34]:
with get_session() as sess:
    sess.run(tf.global_variables_initializer())
    run_a_gan(sess,G_train_step,G_loss,D_train_step,D_loss,G_extra_step,D_extra_step,batch_size=128,num_epoch=5)


Iter: 0, D: 8.27, G:-0.03329
Iter: 50, D: -10.46, G:6.287
Iter: 100, D: -17.24, G:-8.852
Iter: 150, D: -8.471, G:3.714
Iter: 200, D: -7.163, G:6.569
Iter: 250, D: -4.597, G:6.11
Iter: 300, D: -4.143, G:5.39
Iter: 350, D: -6.677, G:-5.917
Iter: 400, D: -4.0, G:6.944
Iter: 450, D: -3.968, G:0.3648
Iter: 500, D: -5.57, G:20.28
Iter: 550, D: -3.385, G:4.411
Iter: 600, D: -3.751, G:2.465
Iter: 650, D: -3.3, G:5.472
Iter: 700, D: -2.461, G:4.042
Iter: 750, D: -2.568, G:3.398
Iter: 800, D: -2.448, G:3.296
Iter: 850, D: -1.329, G:-2.919
Iter: 900, D: -1.978, G:3.216
Iter: 950, D: -1.923, G:3.556
Iter: 1000, D: -1.091, G:-6.363
Iter: 1050, D: -1.773, G:6.094
Iter: 1100, D: -0.9151, G:1.007
Iter: 1150, D: -0.06783, G:3.68
Iter: 1200, D: -0.2741, G:-2.619
Iter: 1250, D: -1.241, G:-1.608
Iter: 1300, D: -0.9298, G:0.6386
Iter: 1350, D: -0.5589, G:3.138
Iter: 1400, D: -0.4777, G:-1.638
Iter: 1450, D: -0.315, G:-9.136
Iter: 1500, D: -0.4978, G:16.64
Iter: 1550, D: -0.5312, G:-3.435
Iter: 1600, D: 0.5232, G:8.557
Iter: 1650, D: -0.3878, G:8.028
Iter: 1700, D: 0.5452, G:7.123
Iter: 1750, D: -0.3712, G:-5.299
Iter: 1800, D: -0.2092, G:16.82
Iter: 1850, D: 0.7834, G:-18.59
Iter: 1900, D: 1.165, G:6.23
Iter: 1950, D: -0.7611, G:-14.25
Iter: 2000, D: 0.4287, G:0.5987
Iter: 2050, D: -1.002, G:-1.315
Iter: 2100, D: 0.1285, G:11.61
Final images

In [ ]: