Training a GAN in Under 50 Lines of Code (with PyTorch)

The author of this article is Dev Nag, a former senior engineer at Google and founder and CTO of the AI startup Wavefront. In it, he shows how he trained a GAN on the PyTorch platform in under fifty lines of code.

What is a GAN?

Before diving into the technical details, and for the benefit of developers new to the field, let's first cover what a GAN is.

In 2014, Ian Goodfellow and his colleagues at the University of Montreal published a paper that shook the research community. Yes, I'm talking about "Generative Adversarial Nets", which marked the birth of the generative adversarial network (GAN) through an innovative combination of computational graphs and game theory. Their research showed that, given sufficient modeling capacity, two models locked in a game against each other can be trained jointly through plain old backpropagation.

The two models play sharply defined roles. Given a real dataset R, G is the generator, whose job is to produce fake data convincing enough to pass for the real thing; D is the discriminator, which receives data either from the real dataset or from G and labels it as real or fake. Ian Goodfellow's analogy: G is like a forgery workshop trying to make its products as close to the genuine article as possible, while D is the antiquities appraiser who must tell originals from high-quality fakes (except that in this setup the forger G never sees the original data, only D's verdicts; the forger is working blind).

Ideally, both D and G get better and better as training goes on, until G essentially becomes a "master forger" and D loses to G, unable to tell the two data distributions apart.

In practice, what Ian Goodfellow demonstrated amounts to this: G performs a kind of unsupervised learning on the original dataset, finding some way of representing the data in a lower-dimensional manner. And unsupervised learning matters, as Yann LeCun put it: "unsupervised learning is the body of the cake", where the cake stands for the "true AI" that countless researchers and developers are chasing.

Before we start, we need to import the various packages and initialize the variables:


In [1]:
# Generative Adversarial Networks (GAN) example in PyTorch.
# See related blog post at https://medium.com/@devnag/generative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f#.sch4xgsa9
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable

# Data params
data_mean = 4
data_stddev = 1.25

# Model params
g_input_size = 1     # Random noise dimension coming into generator, per output vector
g_hidden_size = 50   # Generator complexity
g_output_size = 1    # Size of generated output vector
d_input_size = 100   # Minibatch size - cardinality of distributions
d_hidden_size = 50   # Discriminator complexity
d_output_size = 1    # Single dimension for 'real' vs. 'fake'
minibatch_size = d_input_size

d_learning_rate = 2e-4  # 2e-4
g_learning_rate = 2e-4
optim_betas = (0.9, 0.999)
num_epochs = 33300
print_interval = 333
d_steps = 1  # 'k' steps in the original GAN paper. Can put the discriminator on higher training freq than generator
g_steps = 1

# ### Uncomment only one of these
#(name, preprocess, d_input_func) = ("Raw data", lambda data: data, lambda x: x)
(name, preprocess, d_input_func) = ("Data and variances", lambda data: decorate_with_diffs(data, 2.0), lambda x: x * 2)

print("Using data [%s]" % (name))


Using data [Data and variances]

Training a GAN with PyTorch

Dev Nag: On the surface, a technique as powerful and sophisticated as GANs might seem to require mountains of code to run, but that isn't necessarily so. Using PyTorch, we can build a simple GAN model in under 50 lines of code. There are really only five components to think about:

  • R: the original, real dataset

  • I: the random noise that goes into the generator, as a source of entropy

  • G: the generator, which tries to imitate the original data

  • D: the discriminator, which tries to tell G's output apart from R

  • The "training" loop, in which we teach G to fool D and teach D to be wary of G

1.) R: In our example, we start with the simplest possible R, a bell curve. This function takes a mean and a standard deviation and returns a function that provides the right shape of sample data, drawn from a Gaussian with those parameters. In our code example, we use a mean of 4.0 and a standard deviation of 1.25.


In [2]:
# ##### DATA: Target data and generator input data

def get_distribution_sampler(mu, sigma):
    return lambda n: torch.Tensor(np.random.normal(mu, sigma, (1, n)))  # Gaussian

2.) I: The input to the generator is random, and to make things a little harder we use a uniform distribution rather than a normal one. This means our model G can't simply shift and scale the input to copy R; it has to reshape the data in a nonlinear way.


In [3]:
def get_generator_input_sampler():
    return lambda m, n: torch.rand(m, n)  # Uniform-dist data into generator, _NOT_ Gaussian
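
As a quick sanity check (my addition, not in the original post), we can draw from both samplers and compare their statistics: the uniform noise lives in [0, 1), nowhere near the target N(4, 1.25), so G has real work to do.

# Sanity check (assumes the two cells above have run): compare sampler statistics.
d_sampler = get_distribution_sampler(data_mean, data_stddev)
gi_sampler = get_generator_input_sampler()
real = d_sampler(1000)            # 1 x 1000 Gaussian samples
noise = gi_sampler(1, 1000)       # 1 x 1000 uniform samples in [0, 1)
print(real.mean(), real.std())    # roughly 4.0 and 1.25
print(noise.mean(), noise.std())  # roughly 0.5 and 0.29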

3.) G: The generator is a standard feedforward graph: two hidden layers and three linear maps. We use an ELU (exponential linear unit) activation. G gets uniformly distributed data samples from I and must find some way to mimic the normally distributed samples from R.


In [4]:
# ##### MODELS: Generator model and discriminator model

class Generator(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Generator, self).__init__()
        self.map1 = nn.Linear(input_size, hidden_size)
        self.map2 = nn.Linear(hidden_size, hidden_size)
        self.map3 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = F.elu(self.map1(x))
        x = F.sigmoid(self.map2(x))
        return self.map3(x)

4.) D: The discriminator's code looks a lot like G's: a feedforward graph with two hidden layers and three linear maps. It takes samples from either R or G and outputs a single value between 0 and 1, corresponding to fake and real. This is about as weak as a neural network gets.


In [5]:
class Discriminator(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Discriminator, self).__init__()
        self.map1 = nn.Linear(input_size, hidden_size)
        self.map2 = nn.Linear(hidden_size, hidden_size)
        self.map3 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = F.elu(self.map1(x))
        x = F.elu(self.map2(x))
        return F.sigmoid(self.map3(x))

In [6]:
# Some additional boilerplate code
def extract(v):
    return v.data.storage().tolist()

def stats(d):
    return [np.mean(d), np.std(d)]

def decorate_with_diffs(data, exponent):
    # Append each value's deviation from the minibatch mean, raised to
    # `exponent`, so D sees spread information alongside the raw values.
    mean = torch.mean(data.data, 1, keepdim=True)
    mean_broadcast = torch.mul(torch.ones(data.size()), mean.tolist()[0][0])
    diffs = torch.pow(data - Variable(mean_broadcast), exponent)
    return torch.cat([data, diffs], 1)

d_sampler = get_distribution_sampler(data_mean, data_stddev)
gi_sampler = get_generator_input_sampler()
G = Generator(input_size=g_input_size, hidden_size=g_hidden_size, output_size=g_output_size)
D = Discriminator(input_size=d_input_func(d_input_size), hidden_size=d_hidden_size, output_size=d_output_size)
criterion = nn.BCELoss()  # Binary cross entropy: http://pytorch.org/docs/nn.html#bceloss
d_optimizer = optim.Adam(D.parameters(), lr=d_learning_rate, betas=optim_betas)
g_optimizer = optim.Adam(G.parameters(), lr=g_learning_rate, betas=optim_betas)
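
To make the preprocessing concrete, here is a small illustration (mine, not the original post's): with an exponent of 2.0, decorate_with_diffs appends each value's squared deviation from the minibatch mean, doubling the width of what D sees. That is exactly why d_input_func multiplies the discriminator's input size by 2.

# Illustration (assumes the cells above have run): preprocess doubles the width.
sample = Variable(d_sampler(d_input_size))   # shape (1, 100): raw Gaussian draws
print(preprocess(sample).size())             # torch.Size([1, 200]): values + squared diffs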

5.) Finally, the training loop alternates between two modes: first we train D on real data vs. fake data, with accurate labels; then we train G to fool D, using inaccurate labels. It's a battle between good and evil, folks.

Even if you've never seen PyTorch before, you can probably tell what's going on. In the first section (the for d_index in range(d_steps) loop), we push both types of data through D and apply our criterion to D's guesses vs. the true labels. That's the "forward" step; we then call "backward()" to compute the gradients, which are used to update D's parameters in the d_optimizer step(). G is used here, but not trained.

In the final section (the for g_index in range(g_steps) loop), we do the same thing for G. Note that we run G's output through D (essentially giving the forger an appraiser to practice against), but at this step we do not optimize or change D. We don't want the appraiser D to learn from the wrong labels. So we call only g_optimizer.step().


In [7]:
for epoch in range(num_epochs):
    for d_index in range(d_steps):
        # 1. Train D on real+fake
        D.zero_grad()

        #  1A: Train D on real
        d_real_data = Variable(d_sampler(d_input_size))
        d_real_decision = D(preprocess(d_real_data))
        d_real_error = criterion(d_real_decision, Variable(torch.ones(1)))  # ones = true
        d_real_error.backward() # compute/store gradients, but don't change params

        #  1B: Train D on fake
        d_gen_input = Variable(gi_sampler(minibatch_size, g_input_size))
        d_fake_data = G(d_gen_input).detach()  # detach to avoid training G on these labels
        d_fake_decision = D(preprocess(d_fake_data.t()))
        d_fake_error = criterion(d_fake_decision, Variable(torch.zeros(1)))  # zeros = fake
        d_fake_error.backward()
        d_optimizer.step()     # Only optimizes D's parameters; changes based on stored gradients from backward()

    for g_index in range(g_steps):
        # 2. Train G on D's response (but DO NOT train D on these labels)
        G.zero_grad()

        gen_input = Variable(gi_sampler(minibatch_size, g_input_size))
        g_fake_data = G(gen_input)
        dg_fake_decision = D(preprocess(g_fake_data.t()))
        g_error = criterion(dg_fake_decision, Variable(torch.ones(1)))  # we want to fool, so pretend it's all genuine

        g_error.backward()
        g_optimizer.step()  # Only optimizes G's parameters

    if epoch % print_interval == 0:
        print("epoch: %s : D: %s/%s G: %s (Real: %s, Fake: %s) " % (epoch,
                                                            extract(d_real_error)[0],
                                                            extract(d_fake_error)[0],
                                                            extract(g_error)[0],
                                                            stats(extract(d_real_data)),
                                                            stats(extract(d_fake_data))))


epoch: 0 : D: 0.6756250262260437/0.6949571967124939 G: 0.6932275295257568 (Real: [3.797525091310963, 1.2744240667347841], Fake: [-0.06507278390228749, 0.005865310861894227]) 
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/functional.py:1006: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/functional.py:1594: UserWarning: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 1])) is deprecated. Please ensure they have the same size.
  "Please ensure they have the same size.".format(target.size(), input.size()))
epoch: 333 : D: 0.004184644669294357/0.4219677746295929 G: 1.0998070240020752 (Real: [3.8363794952631, 1.2562042785332677], Fake: [0.6141429018974304, 0.06342274141309508]) 
epoch: 666 : D: 0.0001019291375996545/0.3601710796356201 G: 1.388097882270813 (Real: [4.243743689060211, 1.1383032888874962], Fake: [-0.4117788940668106, 0.15530177699740133]) 
epoch: 999 : D: 0.00012601216440089047/0.01182420365512371 G: 4.014085292816162 (Real: [3.9179672729969024, 1.2900959436523056], Fake: [0.10195493541657924, 0.413007952306422]) 
epoch: 1332 : D: 0.013247109018266201/0.30478543043136597 G: 0.8637999296188354 (Real: [4.231501711606979, 1.2755507049959358], Fake: [2.970813472867012, 1.0525714186745927]) 
epoch: 1665 : D: 1.1288450956344604/0.7813523411750793 G: 0.5801395773887634 (Real: [3.999740565419197, 1.1499951014821739], Fake: [4.87982675909996, 1.2463250398782355]) 
epoch: 1998 : D: 0.7237128615379333/0.49139922857284546 G: 0.9581004977226257 (Real: [3.885965874195099, 1.1531454931567935], Fake: [5.384540615081787, 1.1588632799841978]) 
epoch: 2331 : D: 0.5714672803878784/0.6522347927093506 G: 1.104777455329895 (Real: [4.065389407873154, 1.1718825603658733], Fake: [4.538178300857544, 1.2360452300215965]) 
epoch: 2664 : D: 0.5752519369125366/0.7990370392799377 G: 0.716134250164032 (Real: [3.7869122195243836, 1.2971060421947551], Fake: [3.1833598524332047, 1.2410427398469996]) 
epoch: 2997 : D: 0.7283651828765869/0.9088513851165771 G: 0.5198307633399963 (Real: [3.9785392075777053, 1.2118788403306824], Fake: [4.019382575750351, 1.3765297894938897]) 
epoch: 3330 : D: 0.821840226650238/0.641017735004425 G: 0.9794363379478455 (Real: [4.181964464783668, 1.204341988794736], Fake: [4.3094582331180575, 1.2569899804238043]) 
epoch: 3663 : D: 0.6814557909965515/0.6141929626464844 G: 0.6692516803741455 (Real: [4.019061825275421, 1.1490027351431031], Fake: [3.7257178378105165, 1.189356338831795]) 
epoch: 3996 : D: 0.5968663096427917/0.7361031174659729 G: 0.6532071232795715 (Real: [3.8463756781816483, 1.229351228251971], Fake: [4.331245048046112, 1.2289062563257327]) 
epoch: 4329 : D: 0.6246540546417236/0.6336896419525146 G: 0.6794263124465942 (Real: [3.9065694040060044, 1.289087205711128], Fake: [3.737256639003754, 1.078303114738332]) 
epoch: 4662 : D: 0.8731806874275208/0.7604641318321228 G: 0.5286702513694763 (Real: [4.002717619538307, 1.3395114304928535], Fake: [4.33557056427002, 1.256379622544738]) 
epoch: 4995 : D: 0.5978611707687378/0.7346059679985046 G: 0.8002535700798035 (Real: [3.950980445146561, 1.1552338090730936], Fake: [3.948077895641327, 1.041468420536672]) 
epoch: 5328 : D: 0.5714923143386841/0.6336523294448853 G: 0.6411612629890442 (Real: [3.952063804268837, 1.1073937372810974], Fake: [4.790360021591186, 1.0393450840052039]) 
epoch: 5661 : D: 0.9503528475761414/0.6631380319595337 G: 0.5745768547058105 (Real: [3.9628926503658293, 1.1402082213788882], Fake: [3.955947287082672, 1.1909584279358496]) 
epoch: 5994 : D: 0.7266786098480225/0.7695509791374207 G: 0.6525369882583618 (Real: [3.7974710422754288, 1.359039245881663], Fake: [3.8569750797748568, 1.1315972319489287]) 
epoch: 6327 : D: 0.3753223717212677/0.5862118005752563 G: 1.1246390342712402 (Real: [3.9452745443582535, 1.4064816950806198], Fake: [4.166705198287964, 1.4803319198291416]) 
epoch: 6660 : D: 0.8460464477539062/0.721608579158783 G: 0.8986248970031738 (Real: [3.8976517128944397, 1.1225475625127024], Fake: [4.168817678093911, 1.4705896511863403]) 
epoch: 6993 : D: 0.7471769452095032/0.7444184422492981 G: 0.6832404136657715 (Real: [4.108252413272858, 0.9507818671130699], Fake: [3.522575467824936, 0.9907852073872903]) 
epoch: 7326 : D: 0.5424266457557678/0.4755423367023468 G: 0.7527096271514893 (Real: [3.896732639372349, 1.3479435200422167], Fake: [4.527911084890365, 1.2265989834744258]) 
epoch: 7659 : D: 0.6305518746376038/0.6114867329597473 G: 0.8793816566467285 (Real: [4.0699305775761605, 1.2054799872498871], Fake: [3.974954572916031, 1.1412587514651067]) 
epoch: 7992 : D: 0.589032769203186/0.5701113343238831 G: 0.7259480357170105 (Real: [4.041962708830834, 1.2090574273223398], Fake: [3.927955323457718, 1.2257643615010048]) 
epoch: 8325 : D: 1.0093719959259033/0.7181153297424316 G: 0.7375578880310059 (Real: [4.07252063035965, 1.312615231823646], Fake: [4.319156523942947, 1.3978933517810779]) 
epoch: 8658 : D: 0.24075759947299957/0.5760841369628906 G: 0.7013813853263855 (Real: [3.741774910092354, 1.2784373735889016], Fake: [3.5007342541217805, 1.2120288240248058]) 
epoch: 8991 : D: 0.5922752022743225/0.7478154897689819 G: 0.4110085368156433 (Real: [4.043490694761276, 1.3368493389479759], Fake: [4.445062127113342, 1.3100577626159646]) 
epoch: 9324 : D: 0.530786395072937/0.5866711139678955 G: 0.6554206013679504 (Real: [3.872964341044426, 1.263220399640566], Fake: [4.297151064872741, 1.0440316757524888]) 
epoch: 9657 : D: 0.4428011178970337/1.572737455368042 G: 0.6727628111839294 (Real: [4.274622405767441, 1.2045360476748452], Fake: [3.6428915750980377, 1.3732983815567235]) 
epoch: 9990 : D: 1.1048412322998047/0.7209106087684631 G: 0.9201861023902893 (Real: [3.9316427433490753, 1.2199739494089814], Fake: [4.282348967790604, 1.2877234187884463]) 
epoch: 10323 : D: 0.7072091698646545/0.6743326187133789 G: 0.569561243057251 (Real: [4.018747386336327, 1.1265110798486715], Fake: [3.7915724581480026, 1.2510349071682885]) 
epoch: 10656 : D: 0.782289981842041/0.8664907217025757 G: 0.6822099089622498 (Real: [3.980665715932846, 1.141724894655334], Fake: [3.858938668370247, 1.2624009826767304]) 
epoch: 10989 : D: 0.5550408363342285/0.5886823534965515 G: 0.9129071831703186 (Real: [3.878229733146727, 1.271487490674731], Fake: [4.069242226481438, 1.340088996937484]) 
epoch: 11322 : D: 0.9955084323883057/0.6235008835792542 G: 0.7287721633911133 (Real: [3.8913443756103514, 1.303521906380255], Fake: [4.634545825719833, 1.2783227973610638]) 
epoch: 11655 : D: 0.425062894821167/0.5097630023956299 G: 0.6149874329566956 (Real: [4.075950679779052, 1.3884703596380636], Fake: [3.89657709300518, 1.1785644308603092]) 
epoch: 11988 : D: 0.5355099439620972/0.7621915936470032 G: 0.8367519974708557 (Real: [3.9884157586097717, 1.143393702152221], Fake: [3.697555334568024, 1.1523859376037104]) 
epoch: 12321 : D: 0.29849347472190857/0.511410653591156 G: 0.4334257245063782 (Real: [4.002051368355751, 1.1678267993656215], Fake: [4.097470457553864, 1.2804577405697963]) 
epoch: 12654 : D: 0.9414968490600586/0.5398526787757874 G: 0.8518809676170349 (Real: [4.039852044582367, 1.1241214448281922], Fake: [4.238243347406387, 1.268782115693599]) 
epoch: 12987 : D: 0.8157317638397217/0.5688516497612 G: 0.5382463932037354 (Real: [4.035471341311932, 1.2284086931343055], Fake: [3.9582596719264984, 1.2723225302013321]) 
epoch: 13320 : D: 0.8691128492355347/0.5538142919540405 G: 0.956184446811676 (Real: [3.9161506581306456, 1.3369675292240888], Fake: [3.747518518567085, 1.2778569671860183]) 
epoch: 13653 : D: 0.44237521290779114/0.3266385793685913 G: 0.2961215078830719 (Real: [3.954829429388046, 1.2693081904817396], Fake: [4.04792136490345, 1.3173204343948768]) 
epoch: 13986 : D: 0.5836899280548096/0.39506155252456665 G: 0.803589403629303 (Real: [3.8307071113586426, 1.2176719788778423], Fake: [3.812936001420021, 1.2948393391875812]) 
epoch: 14319 : D: 0.13900716602802277/0.6253240704536438 G: 0.6826146841049194 (Real: [4.190562291741371, 1.5416367057010134], Fake: [3.735645644068718, 1.2223517849386567]) 
epoch: 14652 : D: 0.8473951816558838/0.8165951371192932 G: 0.6455872654914856 (Real: [3.95789337515831, 1.2137376257212162], Fake: [4.126859483122826, 1.3153520032692214]) 
epoch: 14985 : D: 0.2806280553340912/0.3345256447792053 G: 1.0388591289520264 (Real: [3.9119925561547277, 1.2663213194487148], Fake: [4.24225417137146, 1.1956142135175551]) 
epoch: 15318 : D: 1.6741622686386108/0.43404027819633484 G: 0.8617938160896301 (Real: [4.09971764266491, 1.336432404099796], Fake: [3.998367471694946, 1.176248703365973]) 
epoch: 15651 : D: 1.2029083967208862/0.5559860467910767 G: 0.7374796271324158 (Real: [3.986091250181198, 1.3732707711161942], Fake: [4.082338889837265, 1.246675114439523]) 
epoch: 15984 : D: 1.1035114526748657/0.442852258682251 G: 0.6722480058670044 (Real: [4.3609007036685945, 1.1952795938068546], Fake: [4.262604860067367, 1.0985021760329996]) 
epoch: 16317 : D: 0.9145662784576416/0.6762352585792542 G: 1.211079716682434 (Real: [3.845157763361931, 1.1558184323930347], Fake: [4.126611819267273, 1.1616643208276343]) 
epoch: 16650 : D: 0.45831021666526794/0.4365103542804718 G: 1.1993489265441895 (Real: [3.8041795498132704, 1.1401371232437258], Fake: [4.360615882873535, 1.0716932063427658]) 
epoch: 16983 : D: 0.704085648059845/0.4024212062358856 G: 1.2893003225326538 (Real: [4.00688290655613, 1.3347567722505858], Fake: [3.766740233898163, 1.3075194208997163]) 
epoch: 17316 : D: 0.6377599239349365/0.47763633728027344 G: 1.1389269828796387 (Real: [4.01280187010765, 1.2292688896310613], Fake: [4.217965602874756, 1.1316529351144025]) 
epoch: 17649 : D: 0.42861706018447876/0.8288334608078003 G: 0.4950083792209625 (Real: [4.1326367688179015, 1.2950689670128053], Fake: [3.9005602753162383, 1.3371506031401348]) 
epoch: 17982 : D: 0.411531925201416/0.3856951594352722 G: 1.276703953742981 (Real: [4.169259392023086, 1.1223556726345614], Fake: [4.184094955921173, 1.2290863984410296]) 
epoch: 18315 : D: 0.011615841649472713/0.20495493710041046 G: 1.1552231311798096 (Real: [3.884097787737846, 1.3083043005348036], Fake: [4.248319854736328, 1.2105403865635866]) 
epoch: 18648 : D: 0.1451406329870224/0.7529121041297913 G: 1.2166028022766113 (Real: [4.0189475667476655, 1.130323822758671], Fake: [3.653565917015076, 1.2187753827106902]) 
epoch: 18981 : D: 0.06418471038341522/2.5845789909362793 G: 2.4598724842071533 (Real: [3.8720359992980957, 1.2165502306126925], Fake: [3.7838366270065307, 1.4344420862189045]) 
epoch: 19314 : D: 1.0279525518417358/0.3471081852912903 G: 1.5880794525146484 (Real: [4.002955752015114, 1.2198983396496714], Fake: [4.1179825460910795, 1.2181192904992109]) 
epoch: 19647 : D: 0.3727129101753235/0.41133439540863037 G: 0.6831838488578796 (Real: [3.8283260837197304, 1.1880975967755751], Fake: [3.996099455356598, 1.4163505792493274]) 
epoch: 19980 : D: 0.1789671629667282/0.29737588763237 G: 1.4680149555206299 (Real: [3.8773398876190184, 1.3346618621778186], Fake: [3.839455431699753, 1.058501079234222]) 
epoch: 20313 : D: 0.3092723488807678/0.612485408782959 G: 0.4970780313014984 (Real: [3.834235247373581, 1.132903569732207], Fake: [3.7507175743579864, 1.510290623653538]) 
epoch: 20646 : D: 0.6937085390090942/0.4742297828197479 G: 1.35728120803833 (Real: [4.039387381076812, 1.23039372564928], Fake: [4.369354741573334, 1.1845666903322054]) 
epoch: 20979 : D: 0.08084474503993988/0.40201589465141296 G: 1.4511408805847168 (Real: [4.026229251623153, 1.2826032969453376], Fake: [3.958843162059784, 1.2268715635154055]) 
epoch: 21312 : D: 0.6774250268936157/0.32094958424568176 G: 1.624078392982483 (Real: [3.95344273686409, 1.2735493526565893], Fake: [4.036163539886474, 1.269278252368023]) 
epoch: 21645 : D: 1.2363476753234863/0.8782679438591003 G: 1.3176521062850952 (Real: [4.066971597671508, 1.2837757280983781], Fake: [4.364451236724854, 1.2131490912282017]) 
epoch: 21978 : D: 0.30432698130607605/0.5196202397346497 G: 1.9152182340621948 (Real: [4.140865463018417, 1.2923027633185276], Fake: [4.539796953201294, 1.2396718394028532]) 
epoch: 22311 : D: 0.43575236201286316/0.8084354400634766 G: 2.0809824466705322 (Real: [4.0670858514308925, 1.2698950467535033], Fake: [4.138987782001496, 1.3804381047721088]) 
epoch: 22644 : D: 0.4116808772087097/0.37707048654556274 G: 1.7328863143920898 (Real: [3.9934759950637817, 1.3398949462340488], Fake: [3.8074366307258605, 1.210168962764556]) 
epoch: 22977 : D: 0.5070127844810486/0.33242517709732056 G: 0.9199877381324768 (Real: [3.9128821125626563, 1.2232375076272237], Fake: [4.300797667503357, 1.0019317072899698]) 
epoch: 23310 : D: 0.6993991732597351/0.2208181619644165 G: 0.842004120349884 (Real: [4.00804698228836, 1.1915564201080338], Fake: [4.521379454135895, 1.0803991065623797]) 
epoch: 23643 : D: 1.3641899824142456/0.4538697600364685 G: 1.1242088079452515 (Real: [3.855342748761177, 1.0956208305247124], Fake: [3.9253289783000946, 1.2636587288659709]) 
epoch: 23976 : D: 0.034040264785289764/0.1729264110326767 G: 0.5425103306770325 (Real: [3.760840779840946, 1.342745489951146], Fake: [4.598063752651215, 1.4404670016642642]) 
epoch: 24309 : D: 0.2801908850669861/1.651077151298523 G: 1.4400814771652222 (Real: [4.160738716125488, 1.1595960905510416], Fake: [4.656981911659241, 1.4300053465747093]) 
epoch: 24642 : D: 0.695292592048645/0.15069334208965302 G: 0.5891402363777161 (Real: [3.8070837101340294, 1.1481269368667961], Fake: [3.7404602873325348, 1.2627998411129304]) 
epoch: 24975 : D: 0.41083386540412903/0.9364608526229858 G: 0.8411158323287964 (Real: [3.98140393614769, 0.9991655907743979], Fake: [4.454579248428344, 1.1898855264681174]) 
epoch: 25308 : D: 0.13033081591129303/0.28483542799949646 G: 1.358892798423767 (Real: [4.068942202925682, 1.1898490009171305], Fake: [4.80198536157608, 1.2432162457186677]) 
epoch: 25641 : D: 0.5154428482055664/0.4573822021484375 G: 0.5824310779571533 (Real: [3.968116585612297, 1.3428089817756619], Fake: [3.8019103956222535, 1.210032540739773]) 
epoch: 25974 : D: 0.3914489150047302/0.4062656760215759 G: 2.6713249683380127 (Real: [3.880593699812889, 1.1887935951368158], Fake: [4.802889897823333, 1.1336753348595603]) 
epoch: 26307 : D: 0.1879076361656189/0.8102118372917175 G: 0.3506869971752167 (Real: [3.9404853522777556, 1.276655760851264], Fake: [4.492211949825287, 1.2284339802946143]) 
epoch: 26640 : D: 0.45084148645401/2.1223926544189453 G: 0.4928738474845886 (Real: [4.138060618638992, 1.2856796478510077], Fake: [3.545085780620575, 1.228704863643435]) 
epoch: 26973 : D: 0.4740205407142639/0.1903870552778244 G: 1.4043492078781128 (Real: [4.045267324149608, 1.246840806854423], Fake: [4.249826767444611, 1.4465672430058338]) 
epoch: 27306 : D: 0.11324291676282883/0.06111345440149307 G: 3.83959698677063 (Real: [3.998654755949974, 1.1417655666843833], Fake: [5.763360204696656, 1.615486995475312]) 
epoch: 27639 : D: 0.004608625080436468/0.0031187240965664387 G: 5.800661563873291 (Real: [3.8577286982536316, 1.204855782358159], Fake: [6.664654846191406, 1.5773415977276128]) 
epoch: 27972 : D: 0.022302670404314995/0.0007002420607022941 G: 6.553737163543701 (Real: [4.159944670200348, 1.133245017959185], Fake: [7.975712637901307, 1.8152867086654043]) 
epoch: 28305 : D: 0.04149409756064415/0.004769116174429655 G: 5.366230010986328 (Real: [3.889107602238655, 1.2324347700882856], Fake: [8.196344857215882, 0.6021986392058296]) 
epoch: 28638 : D: 0.0012890059733763337/0.21236945688724518 G: 3.1010520458221436 (Real: [3.704421754255891, 1.2771897045084946], Fake: [3.9737429881095885, 0.9184150806326556]) 
epoch: 28971 : D: 0.664581835269928/0.7502431273460388 G: 0.6582263708114624 (Real: [3.8210327023267747, 1.3908029816533451], Fake: [3.3618110430240633, 1.309070084333288]) 
epoch: 29304 : D: 0.8415417671203613/0.42927178740501404 G: 0.9641934037208557 (Real: [3.9394747006893156, 1.316123530030424], Fake: [5.422983114719391, 1.5251743402441007]) 
epoch: 29637 : D: 0.6471612453460693/0.22627073526382446 G: 1.1404048204421997 (Real: [4.088570955097675, 1.2931024602293821], Fake: [4.7354105186462405, 1.254580158557504]) 
epoch: 29970 : D: 0.5351102352142334/0.5531942248344421 G: 0.699086606502533 (Real: [4.071232787370682, 1.3337859480763954], Fake: [2.962063527107239, 1.0473134550205083]) 
epoch: 30303 : D: 0.3803131878376007/1.0837584733963013 G: 0.6992735862731934 (Real: [4.049097361564637, 1.1684475140548392], Fake: [5.115203702449799, 1.324172335452279]) 
epoch: 30636 : D: 0.8405230641365051/0.6195626854896545 G: 0.7905600666999817 (Real: [4.06530257821083, 1.3003891165260986], Fake: [3.9375458359718323, 1.3644381135256312]) 
epoch: 30969 : D: 0.39165762066841125/0.3674534261226654 G: 0.5428770780563354 (Real: [4.129139721393585, 1.37544461971971], Fake: [3.6902462816238404, 1.1191120421154548]) 
epoch: 31302 : D: 0.4465530514717102/0.5724210143089294 G: 0.6240807175636292 (Real: [4.12571477651596, 1.2462429951426266], Fake: [4.547770493030548, 1.382627010571463]) 
epoch: 31635 : D: 0.8665892481803894/0.7576417326927185 G: 0.6840209364891052 (Real: [4.245099468231201, 1.2046027455146942], Fake: [3.5214470291137694, 1.3116071880815385]) 
epoch: 31968 : D: 0.6414663791656494/0.4806385040283203 G: 1.0754262208938599 (Real: [4.065007742643356, 1.2467081633192736], Fake: [4.894264986515045, 1.504646858566065]) 
epoch: 32301 : D: 0.8463589549064636/1.1628025770187378 G: 0.5268152952194214 (Real: [3.8963173246383667, 1.3411025860821173], Fake: [3.4054945063591004, 1.2560014185416148]) 
epoch: 32634 : D: 0.6891605257987976/0.6151688098907471 G: 1.0796968936920166 (Real: [4.106876718997955, 1.3703766342819559], Fake: [4.877430226802826, 1.4686220796670637]) 
epoch: 32967 : D: 0.49122941493988037/1.0000280141830444 G: 0.39396417140960693 (Real: [3.7519747018814087, 1.243886374423885], Fake: [3.4819220113754272, 1.2792157348342004]) 

After several thousand rounds of sparring between D and G, what do we get? The discriminator D improves quickly, while G's progress is much slower at first. But once the models reach a certain level of performance, G finally has a worthy opponent and begins to improve, dramatically.
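
As a final check (a minimal sketch of my own, not from the original post), we can sample from the trained generator and compare its output statistics against the target N(4, 1.25):

# Post-training check: draw 10,000 samples from G and compare to the target.
test_noise = Variable(gi_sampler(10000, g_input_size))
g_samples = extract(G(test_noise))              # flatten G's output to a Python list
print("Generated mean/std:", stats(g_samples))  # ideally close to [4.0, 1.25]
print("Target    mean/std:", [data_mean, data_stddev])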