In [30]:
import numpy as np
Create the following matrices using the given constraints. 4 Points each.
Use np.random.normal centered at 10 with a standard deviation of 5, replace all negative values with 0, and then modify the matrix so that its rows sum to 1.0. Use np.round to round to one decimal place.
In [2]:
import numpy as np
m = np.zeros((6, 12))
m[:,1] = 2**np.arange(6)   # second column: powers of 2 (1, 2, 4, 8, 16, 32)
m[:,2] = 4                 # third column: all 4s
print(m)
In [23]:
data = np.random.normal(size=(10,10), scale=5, loc=10)
data[data < 0] = 0
for i in range(10):
    data[i,:] /= np.sum(data[i,:])   # normalize each row so it sums to 1
print(np.round(data, 1))
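A loop-free alternative for the row normalization uses broadcasting; this is just a sketch, not part of the graded solution above:
# Sketch: the same normalization without the explicit loop.
data = np.random.normal(size=(10, 10), scale=5, loc=10)
data[data < 0] = 0
data /= data.sum(axis=1, keepdims=True)   # divide each row by its own row sum
print(np.round(data, 1))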
In [22]:
data = np.zeros((9,9))
for i in range(9):
    data[i,i] = 1   # 1s on the diagonal: a 9x9 identity matrix
print(data)
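The same identity matrix can also be produced directly with np.eye, shown here only as a one-line alternative sketch:
# Sketch: np.eye builds the 9x9 identity matrix without an explicit loop.
print(np.eye(9))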
Solve the following systems of equations. 4 Points each. Write out your answer in Markdown
In [24]:
import numpy.linalg as lin
A = np.array([[4, -2, 1], [2, -4, 1], [2, 1, 3]])
Ainv = lin.inv(A)
B = np.array([0, 1, 3])
x = Ainv.dot(B)
print(x)
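As a side note, np.linalg.solve avoids forming the inverse explicitly and is the more numerically stable way to solve Ax = B; a sketch using the same A and B as above (the same applies to the next system):
# Sketch: solve the linear system directly instead of computing the inverse.
x = lin.solve(A, B)
print(x)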
In [27]:
A = np.array([[1, 1, 4], [4, -2, -1], [3, 1, 1]])
Ainv = lin.inv(A)
B = np.array([4, -1, 2])
x = Ainv.dot(B)
print(np.log(x[0]), x[1], x[2])
Calculate the eigenvalues and eigenvectors for the following matrices. Solve in Python and then write out the eigenvalues/eigenvectors in LaTeX. When writing decimals, only report two significant figures.
Why eig rather than eigh? eigh assumes a symmetric (Hermitian) matrix, and neither matrix below is symmetric, so eig is the appropriate routine here.
In [28]:
A = np.array([[4, 2, 2], [4, -8, 4], [8, 6, -10]])
print(lin.eig(A))
In [52]:
A = np.array([[1, 5, 1], [2, -1, 2], [1, 2, -3]])
print(lin.eig(A))
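To make the results easier to transcribe into LaTeX, the eigenvalues and eigenvectors can be unpacked and rounded; this sketch rounds to two decimal places, so adjust by hand where two significant figures are required:
# Sketch: unpack eigenvalues and eigenvectors, then round for reporting.
vals, vecs = lin.eig(A)
print(np.round(vals, 2))   # eigenvalues
print(np.round(vecs, 2))   # eigenvectors are the columns of vecs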
Using numpy, create a sum or difference of array slices that yields the requested quantity. Consider the following example:
To create this sequence: $$x_0 , x_2, x_4, \ldots $$
Use this slice
x[::2]
Use this particular array: x = np.arange(15), but use len(x) whenever you need to refer to the length of the array. 2 Points each.
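One quick way to sanity-check a candidate slice expression before writing it up (just a sketch, not part of the assignment) is to print it for the given array:
# Sketch: the example slice x[::2] really does give x_0, x_2, x_4, ...
x = np.arange(15)
print(x[::2])          # [ 0  2  4  6  8 10 12 14]
print(x[0:len(x):2])   # the same slice written using len(x)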
In [20]:
x = np.arange(15)
x[1:] * x[:-1]
Out[20]:
In [21]:
x + x[::-1]
Out[21]:
In [22]:
x[:0:-1] - x[-2::-1]
Out[22]:
Given the following problems, what is the correct method to use? Do not solve the problems, just state the best method. 1 Point each
[4 Points] Compute $\int_{-\infty}^{\infty} x^2 e^{-x^2}\,dx$. Use np.inf to refer to infinity and use a lambda function. Make sure your print statement makes the value of the integral clear.
[4 Points] Compute the numerical derivative of the following data:
x = [0, 1, 2, 3, 4, 6, 7, 9]
fx = [0.0, 0.84, 0.91, 0.14, -0.76, -0.28, 0.66, 0.41]
[2 Points] Compute the numerical derivative of the following data:
x2 = [0.0, 0.42, 0.83, 1.25, 1.67, 2.08, 2.5, 2.92, 3.33, 3.75, 4.17, 4.58, 5.0, 5.42, 5.83, 6.25, 6.67, 7.08, 7.5, 7.92, 8.33, 8.75, 9.17, 9.58, 10.0]
fx2 = [0.0, 0.4, 0.74, 0.95, 1.0, 0.87, 0.6, 0.22, -0.19, -0.57, -0.85, -0.99, -0.96, -0.76, -0.43, -0.03, 0.37, 0.72, 0.94, 1.0, 0.89, 0.62, 0.26, -0.16, -0.54]
[6 Points] Plot your data from 6.2 and 6.3 against $\cos(x)$, which is the correct derivative. Does the numerical derivative work even with the non-uniform coarse data in part 2?
[6 Points] Integrate the data from 6.2 and compare against the true integral $\int_0^{9} \sin(x)\,dx$. How accurate is it?
In [57]:
from scipy.integrate import quad
print('Value of the integral:', quad(lambda x: np.exp(-x**2) * x**2, -np.inf, np.inf)[0])
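For reference, this Gaussian moment has a closed form, $\int_{-\infty}^{\infty} x^2 e^{-x^2}\,dx = \sqrt{\pi}/2 \approx 0.886$, so quad's result can be checked directly:
# Sketch: compare the numerical result against the known analytic value sqrt(pi)/2.
print('Analytic value:', np.sqrt(np.pi) / 2)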
In [58]:
x = np.array([0, 1, 2, 3, 4, 6, 7, 9])
fx = np.array([0.0, 0.84, 0.91, 0.14, -0.76, -0.28, 0.66, 0.41])
diff = (fx[1:] - fx[:-1]) / (x[1:] - x[:-1])   # slope over each interval (forward difference)
cdiff = (diff[1:] + diff[:-1]) / 2             # average adjacent slopes: central difference at interior points
print(cdiff)
In [59]:
x2 = np.array([0.0, 0.42, 0.83, 1.25, 1.67, 2.08, 2.5, 2.92, 3.33, 3.75, 4.17, 4.58, 5.0, 5.42, 5.83, 6.25, 6.67, 7.08, 7.5, 7.92, 8.33, 8.75, 9.17, 9.58, 10.0])
fx2 = np.array([0.0, 0.4, 0.74, 0.95, 1.0, 0.87, 0.6, 0.22, -0.19, -0.57, -0.85, -0.99, -0.96, -0.76, -0.43, -0.03, 0.37, 0.72, 0.94, 1.0, 0.89, 0.62, 0.26, -0.16, -0.54])
diff2 = (fx2[1:] - fx2[:-1]) / (x2[1:] - x2[:-1])
cdiff2 = (diff2[1:] + diff2[:-1]) / 2
print(cdiff2)
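np.gradient offers a built-in route to the same estimates: it uses central differences in the interior, one-sided differences at the end points, and accepts the non-uniform x arrays directly. A sketch with the same data:
# Sketch: derivative estimates for both data sets, including the end points.
print(np.gradient(fx, x))
print(np.gradient(fx2, x2))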
In [62]:
import matplotlib.pyplot as plt
%matplotlib inline
np.concatenate((fx[:1],cdiff,fx[-1:]))
Out[62]:
In [65]:
plt.plot(x[1:-1], cdiff, label='6.2 data')
plt.plot(x2[1:-1], cdiff2, label='6.3 data')
plt.plot(x2, np.cos(x2), label='True derivative')
plt.legend()
plt.show()
The numerical derivative is quite accurate, even though for the coarse 6.2 data it is only defined at 6 interior points. It may be less accurate at the end points, though, since they have no central-difference derivative.
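If estimates at the end points were needed, the one-sided slopes already computed in diff could be appended to the central differences; a sketch for the 6.2 data:
# Sketch: pad the central differences with one-sided estimates at the two ends.
full_deriv = np.concatenate(([diff[0]], cdiff, [diff[-1]]))
print(full_deriv)   # now the same length as x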
In [78]:
print(np.sum((fx[1:] + fx[:-1]) / 2 * (x[1:] - x[:-1])), quad(np.sin, 0, 9)[0])
The answer is off by only about 0.2, which is not bad given that the trapezoidal sum only had 8 unevenly spaced data points to work with.
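The same trapezoidal estimate is available as np.trapz, which can serve as a cross-check on the hand-written sum; a sketch:
# Sketch: np.trapz applies the trapezoidal rule with the given non-uniform x.
print(np.trapz(fx, x), quad(np.sin, 0, 9)[0])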