In [3]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$

The true values of the model parameters are as follows:
In [4]:
a_true = 0.5
b_true = 2.0
c_true = -4.0
First, generate a dataset from this model with these parameters and the following characteristics: use 30 uniformly spaced x points on the interval [-5, 5], and add Gaussian noise to the y values with a standard deviation of 2.0 (generate one noise value per data point via the size argument of np.random.normal). After you generate the data, make a plot of the raw data (use points).
In [15]:
# 30 uniformly spaced x points on [-5, 5]
x = np.linspace(-5, 5, 30)

def quadr(x, a, b, c):
    """Quadratic model y = a*x**2 + b*x + c."""
    return a*x**2 + b*x + c

# Noisy data: true model plus Gaussian noise with standard deviation 2.0
sigma = 2.0
y = quadr(x, a_true, b_true, c_true) + np.random.normal(0.0, sigma, x.size)

plt.scatter(x, y)
plt.xlabel('$x$')
plt.ylabel('$y(x)$')
plt.title('$y(x)$ vs $x$')
plt.tight_layout()
Out[15]: [scatter plot of the raw noisy data, $y(x)$ vs $x$]
In [ ]:
assert True # leave this cell for grading the raw data generation and plot
Now fit the model to the dataset to recover estimates for the model's parameters:
In [ ]:
# YOUR CODE HERE
raise NotImplementedError()
In [ ]:
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
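A minimal sketch of one way to do the fit (not the graded solution): use scipy.optimize.curve_fit with the quadr model defined above, passing the known noise level so the returned covariance gives meaningful parameter errors. It assumes x, y, sigma, and quadr from the data-generation cell; the printout format and variable names theta_best, theta_err are illustrative choices.

In [ ]:
# Sketch only: least-squares fit of the quadratic model to the noisy data.
theta_best, theta_cov = opt.curve_fit(quadr, x, y,
                                      sigma=np.full(x.size, sigma),
                                      absolute_sigma=True)
theta_err = np.sqrt(np.diag(theta_cov))  # 1-sigma parameter uncertainties

for name, best, err in zip(('a', 'b', 'c'), theta_best, theta_err):
    print('{} = {:.3f} +/- {:.3f}'.format(name, best, err))

# Plot the data with error bars and the best-fit curve
plt.errorbar(x, y, sigma, fmt='.', label='data')
xfit = np.linspace(-5, 5, 200)
plt.plot(xfit, quadr(xfit, *theta_best), label='best fit')
plt.xlabel('$x$')
plt.ylabel('$y(x)$')
plt.legend()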