Examples of the supported features in Autograd

Before using Autograd for more complicated calculations, it might be useful to experiment with the kinds of functions Autograd is capable of differentiating. The following Python functions are just meant to illustrate what Autograd can do, but please feel free to experiment with other, possibly more complicated, functions as well!


In [1]:
import autograd.numpy as np
from autograd import grad

Supported functions

Here are some examples of supported function implementations that Autograd can differentiate. Keep in mind that this list of examples is not comprehensive; rather, it covers basic constructions that one might often use.

Functions using simple arithmetic


In [15]:
def f1(x):
    return x**3 + 1

In [62]:
f1_grad = grad(f1)

# Remember to send in a float as the argument to the gradient function computed by Autograd!
a = 1.0

# See the evaluated gradient at a using autograd:
print("The gradient of f1 evaluated at a = %g using autograd is: %g"%(a,f1_grad(a)))

# Compare with the analytical derivative, that is f1'(x) = 3*x**2 
grad_analytical = 3*a**2
print("The gradient of f1 evaluated at a = %g by finding the analytic expression is: %g"%(a,grad_analytical))


The gradient of f1 evaluated at a = 1 using autograd is: 3
The gradient of f1 evaluated at a = 1 by finding the analytic expression is: 3
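
Since grad returns an ordinary Python function, it can itself be differentiated by applying grad again. A minimal sketch, reusing f1 from above to compute its second derivative (analytically 6*x):

In [ ]:
# Apply grad twice to obtain the second derivative of f1
f1_grad2 = grad(grad(f1))

a = 1.0
# The analytical second derivative of f1 is 6*x, i.e. 6 at a = 1
print("The second derivative of f1 evaluated at a = %g using autograd is: %g"%(a, f1_grad2(a)))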

Functions with two (or more) arguments

To differentiate with respect to two (or more) arguments of a Python function, Autograd needs to know which argument the function is being differentiated with respect to.


In [13]:
def f2(x1,x2):
    return 3*x1**3 + x2*(x1 - 5) + 1

In [14]:
# By sending the argument 0, Autograd will compute the derivative w.r.t the first variable, in this case x1
f2_grad_x1 = grad(f2,0)

# ... and differentiate w.r.t x2 by sending 1 as an additional argument to grad
f2_grad_x2 = grad(f2,1)

x1 = 1.0
x2 = 3.0 

print("Evaluating at x1 = %g, x2 = %g"%(x1,x2))
print("-"*30)

# Compare with the analytical derivatives:

# Derivative of f2 w.r.t x1 is: 9*x1**2 + x2:
f2_grad_x1_analytical = 9*x1**2 + x2

# Derivative of f2 w.r.t x2 is: x1 - 5:
f2_grad_x2_analytical = x1 - 5

# See the evaluated derivatives:
print("The derivative of f2 w.r.t x1: %g"%( f2_grad_x1(x1,x2) ))
print("The analytical derivative of f2 w.r.t x1: %g"%( f2_grad_x1_analytical ))

print()

print("The derivative of f2 w.r.t x2: %g"%( f2_grad_x2(x1,x2) ))
print("The analytical derivative of f2 w.r.t x2: %g"%( f2_grad_x2_analytical ))


Evaluating at x1 = 1, x2 = 3
------------------------------
The derivative of f2 w.r.t x1: 12
The analytical derivative of f2 w.r.t x1: 12

The derivative of f2 w.r.t x2: -4
The analytical derivative of f2 w.r.t x2: -4

Note that the grad function does not produce the true gradient of the function in this case: the true gradient of a function of two or more variables is a vector, where each element is the partial derivative of the function w.r.t one of the variables.
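
If the full gradient vector is wanted, the partial derivatives can of course be evaluated separately and collected into an array. A minimal sketch, reusing f2_grad_x1 and f2_grad_x2 from above:

In [ ]:
# Collect the partial derivatives into the gradient vector (df2/dx1, df2/dx2)
f2_gradient = np.array([f2_grad_x1(x1,x2), f2_grad_x2(x1,x2)])
print("The gradient of f2 evaluated at (x1, x2) = (%g, %g) is:"%(x1,x2), f2_gradient)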

Functions using the elements of its argument directly


In [12]:
def f3(x): # Assumes x is an array of length 5 or higher
    return 2*x[0] + 3*x[1] + 5*x[2] + 7*x[3] + 11*x[4]**2

In [66]:
f3_grad = grad(f3)

x = np.linspace(0,4,5)

# Print the computed gradient:
print("The computed gradient of f3 is: ", f3_grad(x))

# The analytical gradient is: (2, 3, 5, 7, 22*x[4])
f3_grad_analytical = np.array([2, 3, 5, 7, 22*x[4]])

# Print the analytical gradient:
print("The analytical gradient of f3 is: ", f3_grad_analytical)


The computed gradient of f3 is:  [ 2.  3.  5.  7. 88.]
The analytical gradient of f3 is:  [ 2.  3.  5.  7. 88.]

Note that in this case, when sending an array as the input argument, the output from Autograd is another array. This is the true gradient of the function, in contrast to the previous example. By using arrays to represent the variables, the output from Autograd might be easier to work with, as it is closer to what one would expect from a gradient-evaluating function.
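
For comparison, f2 from the previous example can also be rewritten to take a single array as its argument, in which case grad returns the full gradient vector directly. A minimal sketch, where x[0] plays the role of x1 and x[1] the role of x2:

In [ ]:
def f2_array(x): # Assumes x is an array with 2 elements, x[0] = x1 and x[1] = x2
    return 3*x[0]**3 + x[1]*(x[0] - 5) + 1

f2_array_grad = grad(f2_array)

# Evaluated at the same point as before, (x1, x2) = (1, 3), this should give the vector (12, -4)
print("The gradient of f2_array is: ", f2_array_grad(np.array([1.0, 3.0])))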

Functions using mathematical functions from Numpy


In [10]:
def f4(x):
    return np.sqrt(1+x**2) + np.exp(x) + np.sin(2*np.pi*x)

In [81]:
f4_grad = grad(f4)

x = 2.7

# Print the computed derivative:
print("The computed derivative of f4 at x = %g is: %g"%(x,f4_grad(x)))

# The analytical derivative is: x/sqrt(1 + x**2) + exp(x) + cos(2*pi*x)*2*pi
f4_grad_analytical = x/np.sqrt(1 + x**2) + np.exp(x) + np.cos(2*np.pi*x)*2*np.pi

# Print the analytical derivative:
print("The analytical derivative of f4 at x = %g is: %g"%(x,f4_grad_analytical))


The computed derivative of f4 at x = 2.7 is: 13.8759
The analytical derivative of f4 at x = 2.7 is: 13.8759

Functions using if-else tests


In [75]:
def f5(x):
    if x >= 0:
        return x**2
    else:
        return -3*x + 1

In [76]:
f5_grad = grad(f5)

x = 2.7

# Print the computed derivative:
print("The computed derivative of f5 at x = %g is: %g"%(x,f5_grad(x)))

# The analytical derivative is: 
# if x >= 0, then 2*x
# else -3

if x >= 0:
    f5_grad_analytical = 2*x
else:
    f5_grad_analytical = -3


# Print the analytical derivative:
print("The analytical derivative of f5 at x = %g is: %g"%(x,f5_grad_analytical))


The computed derivative of f5 at x = 2.7 is: 5.4
The analytical derivative of f5 at x = 2.7 is: 5.4
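
Autograd differentiates the branch that is actually executed for the given input, so evaluating the gradient function at a negative argument uses the else-branch instead. A minimal sketch, reusing f5_grad from above:

In [ ]:
x_neg = -2.0

# For x < 0, f5 returns -3*x + 1, so the derivative should be -3
print("The computed derivative of f5 at x = %g is: %g"%(x_neg, f5_grad(x_neg)))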

Functions using for- and while loops


In [88]:
def f6_for(x):
    val = 0
    for i in range(10):
        val = val + x**i
    return val

def f6_while(x):
    val = 0
    i = 0
    while i < 10:
        val = val + x**i
        i = i + 1
    return val

In [87]:
f6_for_grad = grad(f6_for)
f6_while_grad = grad(f6_while)

x = 0.5

# Print the computed derivatives of f6_for and f6_while
print("The computed derivative of f6_for at x = %g is: %g"%(x,f6_for_grad(x)))
print("The computed derivative of f6_while at x = %g is: %g"%(x,f6_while_grad(x)))

# Both functions are implementations of the sum: sum(x**i) for i = 0, ..., 9
# The analytical derivative is: sum(i*x**(i-1)) for i = 0, ..., 9
f6_grad_analytical = 0
for i in range(10):
    f6_grad_analytical += i*x**(i-1)

print("The analytical derivative of f6 at x = %g is: %g"%(x,f6_grad_analytical))


The computed derivative of f6_for at x = 0.5 is: 3.95703
The computed derivative of f6_while at x = 0.5 is: 3.95703
The analytical derivative of f6 at x = 0.5 is: 3.95703

Functions using recursion


In [116]:
def f7(n): # Assume that n is an integer
    if n == 1 or n == 0:
        return 1
    else:
        return n*f7(n-1)

In [115]:
f7_grad = grad(f7)

n = 2.0

print("The computed derivative of f7 at n = %d is: %g"%(n,f7_grad(n)))

# The function f7 is an implementation of the factorial of n.
# By using the product rule, one can find that the derivative is:

f7_grad_analytical = 0
for i in range(int(n)-1):
    tmp = 1
    for k in range(int(n)-1):
        if k != i:
            tmp *= (n - k)
    f7_grad_analytical += tmp

print("The analytical derivative of f7 at n = %d is: %g"%(n,f7_grad_analytical))


The computed derivative of f7 at n = 2 is: 1
The analytical derivative of f7 at n = 2 is: 1

Note that if n is equal to zero or one, Autograd will give an error message. This message appears because the output is then independent of the input.

Unsupported functions

Autograd supports many features. However, there are some constructions that are not (yet) supported by Autograd.

Assigning a value to the variable being differentiated with respect to


In [8]:
def f8(x): # Assume x is an array
    x[2] = 3
    return x*2

In [9]:
f8_grad = grad(f8)

x = 8.4

print("The derivative of f8 is:",f8_grad(x))


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-9-40cbd60623bc> in <module>()
      3 x = 8.4
      4 
----> 5 print("The derivative of f8 is:",f8_grad(x))

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/wrap_util.py in nary_f(*args, **kwargs)
     18             else:
     19                 x = tuple(args[i] for i in argnum)
---> 20             return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)
     21         return nary_f
     22     return nary_operator

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/differential_operators.py in grad(fun, x)
     22     arguments as `fun`, but returns the gradient instead. The function `fun`
     23     should be scalar-valued. The gradient has the same type as the argument."""
---> 24     vjp, ans = _make_vjp(fun, x)
     25     if not vspace(ans).size == 1:
     26         raise TypeError("Grad only applies to real scalar-output functions. "

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/core.py in make_vjp(fun, x)
      8 def make_vjp(fun, x):
      9     start_node = VJPNode.new_root(x)
---> 10     end_value, end_node =  trace(start_node, fun, x)
     11     if end_node is None:
     12         def vjp(g): return vspace(x).zeros()

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/tracer.py in trace(start_node, fun, x)
      8     with trace_stack.new_trace() as t:
      9         start_box = new_box(x, t, start_node)
---> 10         end_box = fun(start_box)
     11         if isbox(end_box) and end_box._trace == start_box._trace:
     12             return end_box._value, end_box._node

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/wrap_util.py in unary_f(x)
     13                 else:
     14                     subargs = subvals(args, zip(argnum, x))
---> 15                 return fun(*subargs, **kwargs)
     16             if isinstance(argnum, int):
     17                 x = args[argnum]

<ipython-input-8-00b0c64acf16> in f8(x)
      1 def f8(x): # Assume x is an array
----> 2     x[2] = 3
      3     return x*2

TypeError: 'ArrayBox' object does not support item assignment

Here, Autograd tells us that an 'ArrayBox' does not support item assignment. The item assignment happens when the program tries to assign the value 3 to x[2]. However, Autograd's implementation of the derivative computation does not allow such assignments.
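
One way to work around this limitation is to build a new array instead of modifying the argument in place, for instance with slicing and np.concatenate, both of which Autograd supports. A minimal sketch of this idea (note that, unlike f8, it returns a scalar, since grad requires a scalar-valued function):

In [ ]:
def f8_alternative(x): # Assume x is an array of length 5 or higher
    # Replace x[2] by the value 3 without item assignment:
    x_new = np.concatenate((x[:2], np.array([3.0]), x[3:]))
    return np.sum(2*x_new)

f8_alternative_grad = grad(f8_alternative)

x = np.linspace(0,4,5)

# The overwritten element does not depend on x, so the expected gradient is (2, 2, 0, 2, 2)
print("The gradient of f8_alternative is:", f8_alternative_grad(x))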

The syntax a.dot(b) when finding the dot product


In [25]:
def f9(a): # Assume a is an array with 2 elements
    b = np.array([1.0,2.0])
    return a.dot(b)

In [26]:
f9_grad = grad(f9)

x = np.array([1.0,0.0])

print("The derivative of f9 is:",f9_grad(x))


---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-26-d75ebc85c124> in <module>()
      3 x = np.array([1.0,0.0])
      4 
----> 5 print("The derivative of f9 is:",f9_grad(x))

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/wrap_util.py in nary_f(*args, **kwargs)
     18             else:
     19                 x = tuple(args[i] for i in argnum)
---> 20             return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)
     21         return nary_f
     22     return nary_operator

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/differential_operators.py in grad(fun, x)
     22     arguments as `fun`, but returns the gradient instead. The function `fun`
     23     should be scalar-valued. The gradient has the same type as the argument."""
---> 24     vjp, ans = _make_vjp(fun, x)
     25     if not vspace(ans).size == 1:
     26         raise TypeError("Grad only applies to real scalar-output functions. "

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/core.py in make_vjp(fun, x)
      8 def make_vjp(fun, x):
      9     start_node = VJPNode.new_root(x)
---> 10     end_value, end_node =  trace(start_node, fun, x)
     11     if end_node is None:
     12         def vjp(g): return vspace(x).zeros()

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/tracer.py in trace(start_node, fun, x)
      8     with trace_stack.new_trace() as t:
      9         start_box = new_box(x, t, start_node)
---> 10         end_box = fun(start_box)
     11         if isbox(end_box) and end_box._trace == start_box._trace:
     12             return end_box._value, end_box._node

/home/krisetine/anaconda3/lib/python3.6/site-packages/autograd/wrap_util.py in unary_f(x)
     13                 else:
     14                     subargs = subvals(args, zip(argnum, x))
---> 15                 return fun(*subargs, **kwargs)
     16             if isinstance(argnum, int):
     17                 x = args[argnum]

<ipython-input-25-a52dde86db0e> in f9(a)
      1 def f9(a): # Assume a is an array with 2 elements
      2     b = np.array([1.0,2.0])
----> 3     return a.dot(b)

AttributeError: 'ArrayBox' object has no attribute 'dot'

Here we are told that the 'dot' function does not belong to Autograd's version of a Numpy array.
To overcome this, an alternative syntax which also computes the dot product can be used:


In [31]:
def f9_alternative(x): # Assume x is an array with 2 elements
    b = np.array([1.0,2.0])
    return np.dot(x,b) # The same as x_1*b_1 + x_2*b_2

In [32]:
f9_alternative_grad = grad(f9_alternative)

x = np.array([3.0,0.0])

print("The gradient of f9 is:",f9_alternative_grad(x))

# The analytical gradient of the dot product of vectors x and b with two elements (x_1,x_2) and (b_1, b_2) respectively
# w.r.t x is (b_1, b_2).


The gradient of f9_alternative is: [1. 2.]

The documentation also recommends avoiding in-place operations such as

a += b
a -= b
a *= b
a /= b
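
The safe alternative is the corresponding out-of-place assignment, for example a = a + b instead of a += b. A minimal sketch of the pattern, using a small illustrative function (here called f10):

In [ ]:
def f10(x):
    a = x**2
    # Instead of the in-place update a += x, use the out-of-place form:
    a = a + x
    return a

f10_grad = grad(f10)

# The derivative of x**2 + x is 2*x + 1, i.e. 5 at x = 2
print("The derivative of f10 at x = 2 is:", f10_grad(2.0))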