In [1]:

```
% ignore - internal setup
path('../scripts', path);
```

In [ ]:

```
function y = func(x)
    y = x;
    for i = 1:30
        y = sin(x + y);
    end
end
```

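For readers following along without MATLAB, the same function transcribes directly to Python (a sketch; `math` is the only dependency):

```python
import math

def func(x):
    """Python transcription of the MATLAB func above:
    iterate y = sin(x + y) thirty times, starting from y = x."""
    y = x
    for _ in range(30):
        y = math.sin(x + y)
    return y

print(func(0.1))
```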
In [10]:

```
syms x y;
y = x;
for i = 1:30
    y = sin(x + y);
end
dydx = diff(y, x)
```

We can now evaluate the expression. In this case, we evaluate at x = 0.1.

In [11]:

```
xpt = 0.1;
dydx = vpa(subs(dydx, x, xpt), 16) % 16 significant digits
```

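As an independent sanity check on that value, a central finite difference in Python recovers a handful of correct digits (a sketch; the step size `h = 1e-6` is an arbitrary choice, and the limited accuracy of differencing is exactly why exact methods like symbolic or automatic differentiation are attractive):

```python
import math

def func(x):
    y = x
    for _ in range(30):
        y = math.sin(x + y)
    return y

def central_diff(f, x, h=1e-6):
    # O(h^2) central difference; accuracy is limited by the trade-off
    # between truncation error and floating-point cancellation.
    return (f(x + h) - f(x - h)) / (2 * h)

xpt = 0.1
print(central_diff(func, xpt))
```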
In [14]:

```
clear;
xpt = 0.1;
x = ainit(xpt, 1); % initialize x at the point, and get 1st derivatives
y = func(x);
format long;
dydx = y{1} % pull out the first derivative
```

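The `ainit` call comes from an operator-overloading AD package (presumably on the `../scripts` path added in the setup cell). The mechanism behind it can be sketched with a minimal forward-mode dual-number class in Python (an illustration of the idea, not that package's actual API):

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries f(x) and f'(x) together."""
    def __init__(self, val, dot=0.0):
        self.val = val   # function value
        self.dot = dot   # derivative with respect to the seeded input

    def __add__(self, other):
        if isinstance(other, Dual):
            return Dual(self.val + other.val, self.dot + other.dot)
        return Dual(self.val + other, self.dot)

    __radd__ = __add__

def sin(d):
    # Chain rule: d/dx sin(u) = cos(u) * u'
    if isinstance(d, Dual):
        return Dual(math.sin(d.val), math.cos(d.val) * d.dot)
    return math.sin(d)

xpt = 0.1
x = Dual(xpt, 1.0)   # seed dx/dx = 1, analogous to ainit(xpt, 1)
y = x
for _ in range(30):
    y = sin(x + y)
print(y.val, y.dot)  # function value and first derivative
```

Every arithmetic operation propagates the derivative alongside the value, so the loop needs no modification at all; that is the appeal of the overloaded approach.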
In [ ]:

```
function [y, yd] = funcad(x)
    xd = 1.0;
    yd = xd;
    y = x;
    for i = 1:30
        yd = (xd + yd)*cos(x + y);
        y = sin(x + y);
    end
end
```

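This hand-differentiated code transcribes line for line to Python (a sketch of the same source-code-transformation style, checked against a finite difference):

```python
import math

def funcad(x):
    """Forward-mode derivative code derived by hand from func:
    each assignment is paired with its differentiated counterpart."""
    xd = 1.0          # seed: dx/dx = 1
    yd = xd           # y = x  =>  yd = xd
    y = x
    for _ in range(30):
        # differentiate y = sin(x + y) BEFORE overwriting y
        yd = (xd + yd) * math.cos(x + y)
        y = math.sin(x + y)
    return y, yd

y, dydx = funcad(0.1)
print(y, dydx)
```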
In [16]:

```
[~, dydx] = funcad(xpt)
```

For a simple expression like this, symbolic differentiation produces a long result but actually works reasonably well, and both approaches give a numerically exact answer. But if we increase the loop to 100+ iterations or add other complications, the symbolic solver will fail or take much longer, whereas automatic differentiation continues to work without issue. Furthermore, as we add dimensions to the problem, symbolic differentiation quickly becomes costly because many computations are repeated, whereas automatic differentiation is able to reuse a lot of intermediate calculations.

As a specific example, if I change the number of iterations to 300 rather than 30, the symbolic differentiation takes 7.0 seconds, the overloaded AD takes 0.7 seconds, and the source code transformation takes 0.001 seconds. The overloaded AD is an order of magnitude faster than symbolic differentiation (and the source code transformation version is blazingly fast). In some languages and implementations, overloaded AD speeds aren't as dramatically different compared to source-code-transformed AD.

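The robustness half of that claim is easy to check in Python (a sketch with the iteration count as a parameter; no timing assertions, just verifying the hand-differentiated forward mode stays consistent with a finite difference as the loop grows):

```python
import math

def funcad(x, n):
    # hand-differentiated forward mode with a configurable iteration count
    xd, yd, y = 1.0, 1.0, x
    for _ in range(n):
        yd = (xd + yd) * math.cos(x + y)
        y = math.sin(x + y)
    return y, yd

for n in (30, 100, 300):
    y, dydx = funcad(0.1, n)
    print(n, y, dydx)
```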