In [1]:
import numpy as np
import matplotlib.pyplot as plt
To use a root-finding method, we must first rearrange our equation so that one side is 0:
$$\cos (x) - x = 0$$
In [2]:
from scipy.optimize import newton
root = newton(lambda x: np.cos(x) - x, x0=0)
print(root, np.cos(root) - root)
[Figure: illustration of Newton's method. Source: Wikipedia, Newton's Method]
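Newton's method only needs a starting guess. As an aside, `scipy.optimize` also provides bracketing root-finders such as `brentq`, which instead need an interval where the function changes sign; a quick sketch on the same equation:

```python
import numpy as np
from scipy.optimize import brentq

# brentq needs an interval [a, b] where the function changes sign:
# cos(0) - 0 = 1 > 0 and cos(1) - 1 < 0, so [0, 1] brackets the root
root = brentq(lambda x: np.cos(x) - x, 0, 1)
print(root, np.cos(root) - root)
```

Bracketing methods are guaranteed to converge once a sign change is found, at the cost of having to supply the bracket.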
This method, like all the methods in this unit, is iterative. You can control the iteration yourself, either by choosing the tolerance or the maximum number of iterations; the functions have sensible defaults.
In [3]:
root = newton(lambda x: np.cos(x) - x, x0=0, tol=1e-3)
print(root, np.cos(root) - root)
In [4]:
root = newton(lambda x: np.cos(x) - x, x0=0, tol=1e-3, maxiter=5)
print(root, np.cos(root) - root)
Let's try to solve a more interesting equation with Newton's method!
$$ \int_0^x e^{-s^2}\,ds = \frac{1}{2}$$
Find $x$.
In [5]:
#Note: this code is intentionally really bad
from scipy.integrate import quad
def equation(x):
    return quad(lambda s: np.exp(-s**2), 0, x)
root = newton(equation, x0=0)
In [6]:
def version_1(x):
    return x**2 - 3 * x + 2
version_2 = lambda x: x**2 - 3 * x + 2
np_version = np.vectorize(version_1)
print(version_1(3.))
print(version_2(3.))
some_threes = np.zeros(10)
some_threes[:] = 3.0 # -> Notice how we don't replace the numpy array with 3, but instead make all elements in it equal to three
#some_threes = 3 -> This would rebind the name: the array would be discarded and some_threes would be a single 3
print(np_version(some_threes))
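In fact, because `version_1` only uses arithmetic operators, NumPy broadcasting already lets us call it on an array directly; `np.vectorize` is mainly needed when a function contains logic (like an `if`) that does not broadcast. A quick check:

```python
import numpy as np

def version_1(x):
    return x**2 - 3 * x + 2

some_threes = np.full(10, 3.0)
# arithmetic operators broadcast element-wise over arrays,
# so no np.vectorize is required here: 9 - 9 + 2 = 2 for each element
print(version_1(some_threes))
```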
In [7]:
from scipy.integrate import quad
import numpy as np
from math import pi
def integrate_sin2(x):
    ans, err = quad(lambda s: np.sin(s) ** 2, pi, x)
    return ans
print(integrate_sin2(2 * pi))
Two vectors, $\vec{x}$ and $\vec{y}$, where one component of $\vec{x}$ is changing, and we want to know the distance between the two vectors.
In [8]:
from math import sqrt
def distance(x, y):
    sum_sq = np.sum((x - y)**2)
    return sqrt(sum_sq)

def my_distance(s):
    x = np.zeros(3)
    y = np.zeros(3)
    x[0] = 2.0
    x[1] = s
    x[2] = -3.5
    y[0] = 1.0
    y[1] = -3.0
    y[2] = 0.0
    return distance(x, y)
print(my_distance(1))
In [9]:
def version_1(x):
    x**2 - 3 * x + 2 #bug: no return statement, so this function returns None
print(version_1(3.))
In [10]:
def integrate_sin2(x):
    return quad(lambda s: np.sin(s) ** 2, pi, x) #bug: quad returns a (value, error) tuple, not just the value
print(integrate_sin2(2 * pi))
In [11]:
def distance(x, y):
    sum_sq = np.sum((x - y)**2)
    return sqrt(sum_sq)
print(distance(1)) #bug: distance takes two arguments, but only one is passed
Let's return to our example from above:
$$ \int_0^x e^{-s^2}\,ds = \frac{1}{2}$$
Find $x$.
In [12]:
#note still wrong
def equation(x):
    ans, err = quad(lambda s: np.exp(-s**2), 0, x)
    return ans
root = newton(equation, x0=0)
print(root)
In [13]:
equation(root)
Out[13]:
We forgot to rearrange the equation to be equal to $0$
In [14]:
root = newton(lambda x: equation(x) - 0.5, x0=0)
print(root, equation(root))
In [15]:
x = 4
y = 2
#Right now, there is x,y and all other functions/variables defined or imported above in the scope
In [16]:
x = 4
y = 2
#Here, I have x and y in scope
def scope_example():
    z = 2
    #Here, I have x, y and z in scope
#Here I again have only x and y in scope
In [17]:
x = 4
y = 2
print(y,"Before function")
def scope_example():
    y = 25 #This is a new version of y that exists only in this scope
    print(y, "Inside function")
scope_example()
print(y, "After Function")
In [18]:
x = 4
y = [2]
print(y,"Before function")
def scope_example():
    y[0] = 25 #Here I'm not creating a y, but modifying y
    print(y, "Inside function")
scope_example()
print(y, "After Function")
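If you really do want to rebind an outer variable from inside a function, Python provides the `global` keyword; a minimal sketch (usually better avoided in favor of return values):

```python
y = 2

def scope_example():
    global y  # rebind the module-level y instead of creating a local one
    y = 25

scope_example()
print(y)  # 25: the module-level y was rebound
```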
Things to remember about scope:
Knowing about convexity can come from:
Consider a function with two minima:
In [19]:
def two_well(x):
    if x < 0.125:
        return (x + 2) ** 2
    if x >= 0.125:
        return (x - 2) ** 2 + 1
np_two_well = np.vectorize(two_well)
In [20]:
x = np.linspace(-4, 4, 1000)
plt.plot(x, np_two_well(x))
plt.show()
In [21]:
def obj(x):
    return x**2
x = np.linspace(-1,1,100)
plt.plot(x, obj(x))
plt.show()
In [22]:
from scipy.optimize import minimize
minimize(obj, x0=3)
Out[22]:
fun: The value of the function at the minimum
hess_inv: The inverse of the Hessian
jac: The value of the Jacobian
message: A string describing what happened
nfev: Number of function evaluations
nit: Number of iterations (updates of the x point)
njev: Number of times the Jacobian was computed
status: A single-digit code (0 = success, != 0 some error)
success: Boolean indicating success
x: The location of the minimum
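These fields are attributes of the returned `OptimizeResult` object, so they can be read with dot notation; for example, with the same objective as above (redefined here so the sketch is self-contained):

```python
import numpy as np
from scipy.optimize import minimize

def obj(x):
    return np.sum(x**2)  # np.sum makes the return value a scalar even for array x

result = minimize(obj, x0=3)
print(result.success)  # True when the solver converged
print(result.x)        # location of the minimum, near 0
print(result.fun)      # value at the minimum, near 0
```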
In [23]:
def f(x):
    return ((x-4)**2)/2 + ((x-2)**2)/4
x = np.linspace(-10, 10, 100)
plt.plot(x, f(x))
plt.show()
In [24]:
result = minimize(f, x0=0)
print(f'the minimum value occurs at {result.x} and is {result.fun}')
The minimum is at $-\infty$!
In [25]:
result = minimize(lambda x: (x - 4) / 2, x0=0)
print(f'the minimum value occurs at {result.x} and is {result.fun}')
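When the objective is linear like this, a minimum only exists if the domain is restricted. One way to express that is the `bounds` argument of `minimize`; a sketch, assuming the domain happens to be $[0, 10]$:

```python
from scipy.optimize import minimize

# restrict x to [0, 10]; with bounds, minimize uses a bound-aware method (L-BFGS-B)
result = minimize(lambda x: (x[0] - 4) / 2, x0=[5.0], bounds=[(0, 10)])
print(result.x, result.fun)  # minimum at the left edge: x = 0, value -2
```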
In [26]:
result = minimize(lambda r: 4 * (r**(-12) - r**(-6)), x0=0)
print(f'the minimum is at {result.x}')
Our initial value was not in the domain!
In [27]:
r = np.linspace(0.9, 2, 100)
y = 4 * (r**(-12) - r**(-6))
plt.plot(r, y)
plt.show()
In [28]:
result = minimize(lambda r: 4 * (r**(-12) - r**(-6)), x0=1)
print(f'the minimum is at {result.x} and its value is {result.fun}')
Maximize the following function:
$$ -\left[x - \cos(x)\right]^2 $$
In [30]:
#negate the objective: maximizing -[x - cos(x)]^2 is the same as minimizing (x - cos(x))^2
result = minimize(lambda x: (x - np.cos(x))**2, x0=1)
print(f'the maximum is at {result.x}')
Minimize the following:
$$ f(x, y) = \frac{(x - 4)^2} { 2 } + \frac{(y + 3)^2} { 5 } $$
In [31]:
result = minimize(lambda x: (x[0] - 4)**2 / 2 + (x[1] + 3)**2 / 5, x0=[0,0])
print(f'minimum occurs when x = {result.x[0]} and y = {result.x[1]}')
In [32]:
x = np.linspace(-1, 1, 1000)
plt.plot(x, 2 * x ** 3 - 0 * x **2 - x)
plt.show()
In [33]:
from scipy.optimize import minimize
minimize(lambda x: 2 * x ** 3 - 0 * x **2 - x, x0=0.05)
Out[33]:
In [34]:
x = np.linspace(-1, 1, 1000)
plt.plot(x, 2 * x ** 3 - 0 * x **2 - x)
plt.show()
In [35]:
from scipy.optimize import minimize
minimize(lambda x: 2 * x ** 3 - 0 * x **2 - x, x0=-0.5)
Out[35]:
In [36]:
from scipy.optimize import root
#rearranged equation so all terms on one side
result = root(lambda x: np.cos(x) + np.sin(x) - x, x0=1)
print(result)
The result type is like what we saw for minimize. Similar terms are here, including the root and the value of the function at the root. Notice it's not exactly $0$ at the root.
In [37]:
x = result.x
print(np.cos(x) + np.sin(x) - x)
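Rather than eyeballing the residual, it is easy to check programmatically that it is within floating-point tolerance of zero; a sketch:

```python
import numpy as np
from scipy.optimize import root

result = root(lambda x: np.cos(x) + np.sin(x) - x, x0=1)
residual = np.cos(result.x) + np.sin(result.x) - result.x
# the residual is not exactly zero, but it is within tolerance of zero
print(np.allclose(residual, 0.0, atol=1e-8))
```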
Solve the following system of equations:
$$ 3 x^2 - 2 y = 4$$
$$ x - 4 y^2 = -2$$
In [38]:
def sys(v):
    #I'm using v here to distinguish from the x in the equations
    #extract x and y
    x = v[0]
    y = v[1]
    #compute equations
    eq1 = 3 * x ** 2 - 2 * y - 4
    eq2 = x - 4 * y**2 + 2
    #pack into list
    sys_eq = [eq1, eq2]
    return sys_eq
root(sys, x0=[1,1])
Out[38]:
So the answer is $x = 1.40,\: y = 0.921$
You should not ignore the information in the output of root. Imagine this small modification:
$$ 3 x^2 - 2 y^2 = 4$$
$$ x^3 - 4 y = -2$$
In [39]:
def sys2(v):
    #I'm using v here to distinguish from the x in the equations
    #extract x and y
    x = v[0]
    y = v[1]
    #compute equations
    eq1 = 3 * x ** 2 - 2 * y**2 - 4
    eq2 = x**3 - 4 * y + 2
    #pack into list
    sys_eq = [eq1, eq2]
    return sys_eq
root(sys2, x0=[1,1])
Out[39]:
If you did not read the message or status, you might believe the method succeeded.
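A defensive pattern is to check `result.success` before using `result.x` at all; a sketch, reusing `sys2` from above (redefined here so the block is self-contained):

```python
import numpy as np
from scipy.optimize import root

def sys2(v):
    x, y = v
    return [3 * x**2 - 2 * y**2 - 4, x**3 - 4 * y + 2]

result = root(sys2, x0=[1, 1])
if result.success:
    print("solution:", result.x)
else:
    # the solver did not converge, so result.x is not a root
    print("root failed:", result.message)
```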
More compact version:
In [40]:
root(lambda x: [3 * x[0] ** 2 - 2 * x[1] - 4, x[0] - 4 * x[1] ** 2 + 2], x0=[1,1])
Out[40]:
In [41]:
root(lambda x: [3 * x[0] ** 2 - 2 * x[1] - 4, x[0] - 4 * x[1] ** 2 + 2], x0=[0, 0])
Out[41]: