In [ ]:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Automatic differentiation and gradient tapes

In the previous tutorial we introduced Tensors and the operations you can perform on them. This tutorial covers automatic differentiation, a key technique for optimizing machine learning models.

Setup


In [ ]:
!pip install tensorflow==2.0.0-beta1
import tensorflow as tf

Gradient tapes

TensorFlow provides the tf.GradientTape API for automatic differentiation: computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a tf.GradientTape onto a "tape". It then uses that tape and the gradients associated with each recorded operation to compute the gradients of the recorded computation using reverse mode differentiation.

For example:


In [ ]:
x = tf.ones((2, 2))

with tf.GradientTape() as t:
  t.watch(x)
  y = tf.reduce_sum(x)
  z = tf.multiply(y, y)

# Derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
for i in [0, 1]:
  for j in [0, 1]:
    assert dz_dx[i][j].numpy() == 8.0
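
In this example x is a constant tensor, so it has to be watched explicitly with t.watch(x). Trainable tf.Variable objects, by contrast, are watched by the tape automatically. A minimal sketch (the names w and loss are illustrative, not part of the original tutorial):


In [ ]:
# tf.Variable objects are watched automatically, so no explicit t.watch() call is needed.
w = tf.Variable(2.0)

with tf.GradientTape() as t:
  loss = w * w

dloss_dw = t.gradient(loss, w)
assert dloss_dw.numpy() == 4.0  # d(w^2)/dw = 2w = 4 at w = 2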

You can also request gradients of the output with respect to intermediate values computed inside the tf.GradientTape context.


In [ ]:
x = tf.ones((2, 2))

with tf.GradientTape() as t:
  t.watch(x)
  y = tf.reduce_sum(x)
  z = tf.multiply(y, y)

# Use the tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0

By default, the resources held by a GradientTape are released as soon as the GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a persistent gradient tape. This allows multiple calls to the gradient() method; the resources are released only when the tape object is garbage collected. For example:


In [ ]:
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
  t.watch(x)
  y = x * x
  z = y * y
dz_dx = t.gradient(z, x)  # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x)  # 6.0
del t  # Drop the reference to the tape
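
Note that a persistent tape is not needed just to differentiate with respect to several variables at once: gradient() also accepts a list of sources and returns one gradient per source. A minimal sketch (the names a, b and loss are illustrative assumptions):


In [ ]:
a = tf.Variable(2.0)
b = tf.Variable(3.0)

with tf.GradientTape() as tape:
  loss = a * a + b  # d(loss)/da = 2a, d(loss)/db = 1

grads = tape.gradient(loss, [a, b])
assert grads[0].numpy() == 4.0
assert grads[1].numpy() == 1.0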

Recording control flow

Because the tape records operations as they are executed, Python control flow (for example, code using if and while) is naturally handled:


In [ ]:
def f(x, y):
  output = 1.0
  for i in range(y):
    if i > 1 and i < 5:
      output = tf.multiply(output, x)
  return output

def grad(x, y):
  with tf.GradientTape() as t:
    t.watch(x)
    out = f(x, y)
  return t.gradient(out, x)

x = tf.convert_to_tensor(2.0)

assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
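
Python while loops are handled the same way, as long as the recorded operations depend on the watched tensor. A minimal sketch (pow_grad and limit are illustrative names, not part of the original tutorial):


In [ ]:
def pow_grad(x, limit):
  with tf.GradientTape() as tape:
    tape.watch(x)
    result = tf.constant(1.0)
    # Keep multiplying by x until the running product reaches the limit.
    while result < limit:
      result = result * x
  return tape.gradient(result, x)

x = tf.constant(2.0)
assert pow_grad(x, 10.0).numpy() == 32.0  # result == x**4 here, so the gradient is 4*x**3 == 32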

Higher-order gradients

Operations inside the GradientTape context manager are recorded for automatic differentiation. If gradients are computed inside that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients. For example:


In [ ]:
x = tf.Variable(1.0)  # Create a TensorFlow variable initialized to 1.0

with tf.GradientTape() as t:
  with tf.GradientTape() as t2:
    y = x * x * x
  # Compute the gradient inside the 't' context manager
  # which means the gradient computation is differentiable as well.
  dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)

assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
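
Because dy_dx is computed inside the outer tape, it can itself be part of a larger differentiable expression. A minimal sketch of this idea (the penalty term and names are illustrative assumptions, not part of the original tutorial):


In [ ]:
x = tf.Variable(2.0)

with tf.GradientTape() as outer:
  with tf.GradientTape() as inner:
    y = x * x * x           # y = x^3
  g = inner.gradient(y, x)  # g = 3x^2, still differentiable
  penalty = tf.square(g)    # penalty = 9x^4

dpenalty_dx = outer.gradient(penalty, x)  # 36x^3 = 288 at x = 2
assert dpenalty_dx.numpy() == 288.0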

Next steps

This tutorial covered gradient computation in TensorFlow. With that, we have enough of the primitives needed to build and train neural networks.