Homework 7 Key

CHE 116: Numerical Methods and Statistics

Prof. Andrew White

Version 1.1 (3/6/2015)



In [2]:
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.special import comb
import numpy as np
from scipy.optimize import *

1. Exercises (5 Points)

1.1 Answer


In [3]:
import scipy.optimize
def y(x):
    return ((x-4)**2)/2+((x-2)**2)/4
result = minimize(y, x0=0)
print result.x


[ 3.33333333]
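The numerical answer can be checked by calculus: setting $y'(x) = (x-4) + (x-2)/2 = 0$ gives $x = 10/3$. A quick sketch of that check, using a centered finite difference at the stationary point:

```python
def y(x):
    return (x - 4)**2 / 2.0 + (x - 2)**2 / 4.0

# From y'(x) = (x - 4) + (x - 2)/2 = 0  =>  3x/2 = 5  =>  x = 10/3
x_star = 10.0 / 3
h = 1e-6
deriv = (y(x_star + h) - y(x_star - h)) / (2 * h)
print(x_star)  # ~3.3333, matching the minimize result
print(deriv)   # ~0, confirming a stationary point
```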

1.2 Answer


In [4]:
from scipy.optimize import minimize
result = minimize(lambda x: 4.0*(((x)**(-12.0))-((x)**(-6))), x0=1, bounds=[(0.01, 10)])
print result.x


[ 1.12246203]
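This is the Lennard-Jones potential, whose minimum has the closed form $x = 2^{1/6}$ (set the derivative of $4(x^{-12} - x^{-6})$ to zero). A sketch of the check:

```python
# d/dx [4(x^-12 - x^-6)] = 4(-12 x^-13 + 6 x^-7) = 0  =>  x^6 = 2
x_star = 2.0 ** (1.0 / 6.0)
print(x_star)  # ~1.12246, matching the numerical result
```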

1.3 Answer (Not graded)


In [16]:
from scipy.optimize import root

def function(x):
    ans = np.zeros(2)
    ans[0] = 3 * x[0]**2.0 + 2.0 * x[1]**2 - 4.0
    ans[1] = 2 * x[0] ** 3 - x[1] + 2.0
    return ans
result = root(function, x0=[0,9])

print result


  status: 1
 success: True
     qtf: array([ -2.71572187e-09,   6.70328030e-10])
    nfev: 25
       r: array([ 11.43317756, -10.27793486,  -0.92679649])
     fun: array([ -3.11750625e-13,   6.81232848e-13])
       x: array([-0.7831945 ,  1.03918697])
 message: 'The solution converged.'
    fjac: array([[-0.62214603,  0.78290122],
       [-0.78290122, -0.62214603]])
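A quick way to validate a root is to substitute it back into both equations; the residuals should be near zero. A sketch using the values reported above:

```python
x0, x1 = -0.7831945, 1.03918697  # root reported by scipy.optimize.root
r0 = 3 * x0**2 + 2 * x1**2 - 4.0
r1 = 2 * x0**3 - x1 + 2.0
print(r0, r1)  # both ~0
```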

In [6]:
result = scipy.optimize.minimize(lambda x:x*np.log(x), x0=0.5, bounds=[(0.01, 0.99)])
print result.x


[ 0.36787945]
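The minimum of $x\ln x$ also has a closed form: $\frac{d}{dx}(x\ln x) = \ln x + 1 = 0$ gives $x = e^{-1}$. A sketch of the check:

```python
import math

x_star = math.exp(-1)  # from ln(x) + 1 = 0
print(x_star)  # ~0.36788, matching the numerical result
```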

2. Box Folding (5 Points)

$V=(4-2x)\times(3-2x)\times x$

So we minimize $-V$. Note we must also constrain $x$ to physically possible dimensions.


In [7]:
from scipy.optimize import minimize

result = minimize(lambda x:  -(4 - 2 * x) * (3 - 2 * x) * x, x0=1)
print "The height is :{} cm ".format( result.x)


The height is :[ 0.56574152] cm 

In [8]:
x = np.linspace(0,1.5,100)
y = -(4 - 2 * x) * (3 - 2 * x) * x
plt.plot(x,y)
plt.vlines(result.x, -3.5,0)
plt.show()
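The numerical height can be verified analytically: expanding gives $V = 4x^3 - 14x^2 + 12x$, and setting $V' = 12x^2 - 28x + 12 = 0$ yields a quadratic whose smaller root is the physical maximum (the larger root exceeds the box dimensions). A sketch:

```python
import math

# V(x) = (4 - 2x)(3 - 2x)x = 4x^3 - 14x^2 + 12x
# V'(x) = 12x^2 - 28x + 12 = 0; take the smaller root
a, b, c = 12.0, -28.0, 12.0
x_star = (-b - math.sqrt(b**2 - 4 * a * c)) / (2 * a)
print(x_star)  # ~0.5657, matching the minimize result
```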


3. Fitting a line (3 Points)


In [9]:
data_3_x = [0.0, 0.1,  0.2,  0.3,  0.4,  0.5,  0.6,  0.7,  0.8,  0.9,  1.0,  1.1,  1.2,  1.3,  1.4,  1.5,  1.6,  1.7,  1.8,  1.9,  2.0,  2.1,  2.2,  2.3,  2.4,  2.5,  2.6,  2.7,  2.8,  2.9,  3.0,  3.1,  3.2,  3.3,  3.4,  3.5,  3.6,  3.7,  3.8,  3.9,  4.0,  4.1,  4.2,  4.3,  4.4,  4.5,  4.6,  4.7,  4.8,  4.9,  5.0,  5.1,  5.2,  5.3,  5.4,  5.5,  5.6,  5.7,  5.8,  5.9,  6.0,  6.1,  6.2,  6.3,  6.4,  6.5,  6.6,  6.7,  6.8,  6.9,  7.0,  7.1,  7.2,  7.3,  7.4,  7.5,  7.6,  7.7,  7.8,  7.9,  8.0,  8.1,  8.2,  8.3,  8.4,  8.5,  8.6,  8.7,  8.8,  8.9,  9.0,  9.1,  9.2,  9.3,  9.4,  9.5,  9.6,  9.7,  9.8,  9.9]
data_3_y = [0.05, 0.45,  0.18,  -0.15,  -0.14,  -0.38,  -0.88,  -0.42,  -0.99,  -0.92,  -1.53,  -1.94,  -1.75,  -2.37,  -2.09,  -2.99,  -3.04,  -2.85,  -2.66,  -3.03,  -3.39,  -3.36,  -3.7,  -4.41,  -4.7,  -4.51,  -4.29,  -5.39,  -4.81,  -5.63,  -5.01,  -5.79,  -6.03,  -6.04,  -6.35,  -6.82,  -6.48,  -6.6,  -6.69,  -7.05,  -7.4,  -8.07,  -7.81,  -7.97,  -8.08,  -8.29,  -8.99,  -9.17,  -9.38,  -9.1,  -9.62,  -9.85,  -9.99,  -9.64,  -10.78,  -10.76,  -10.84,  -11.1,  -11.03,  -11.48,  -11.47,  -11.4,  -11.58,  -11.77,  -11.97,  -12.1,  -12.65,  -12.52,  -12.79,  -13.21,  -13.24,  -13.85,  -13.5,  -13.9,  -14.66,  -14.44,  -14.65,  -14.72,  -14.7,  -14.87,  -15.47,  -15.21,  -15.82,  -16.37,  -16.42,  -16.67,  -16.52,  -16.62,  -17.39,  -16.94,  -17.48,  -18.17,  -18.31,  -17.75,  -17.86,  -18.6,  -18.43,  -19.04,  -19.1,  -18.83]

def model(slope):
    yhat = slope * data_3_x

    return yhat
def obj_fxn(m):
    yhat = model(m)
    return np.sum( (data_3_y - yhat) ** 2)

result = minimize(obj_fxn, x0=25)
plt.plot(data_3_x, data_3_y)
plt.plot(data_3_x, result.x*data_3_x, '--')
plt.show()
print 'The best fit line is :y={}*x' .format(result.x[0])


The best fit line is :y=-1.91435389817*x

4. Fitting a Binomial Distribution to Data (2 Points)


In [14]:
from scipy.special import comb

bin_data = [4, 3,  6,  1,  2,  1,  1,  1,  3,  0,  4,  2,  4,  2,  2,  3,  2,  4,  4,  3]
N = 20

def bin_fit(p, data, N):
    fit = 0
    for di in data:
        fit += np.log( comb(N, di) * (1 - p)**(N - di) * p**di )
    return fit

#Example of how to use
print bin_fit(0.8, bin_data, N)


-452.321017887

In [15]:
def bin_fit(p, data, N):
    fit = 0
    for di in data:
        fit += np.log( comb(N, di) * (1 - p)**(N - di) * p**di )
    return fit

result = minimize(lambda x: -bin_fit(x, bin_data, N), x0=0.1, bounds=[(0.01,0.5)])

print result.x
x = np.linspace(0.1, 0.3, 100)
bin_fit_np = np.frompyfunc(lambda x: bin_fit(x, bin_data, N), 1, 1)
plt.vlines(result.x, -70, -35)
plt.plot(x, bin_fit_np(x))
plt.show()


[ 0.12999999]
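The numerical answer agrees with the closed-form binomial MLE: setting the derivative of the log-likelihood to zero gives $\hat{p} = \sum_i d_i / (nN)$, where $n$ is the number of observations. A sketch using the same data:

```python
bin_data = [4, 3, 6, 1, 2, 1, 1, 1, 3, 0, 4, 2, 4, 2, 2, 3, 2, 4, 4, 3]
N = 20
# MLE of binomial p: total successes over total trials
p_hat = sum(bin_data) / float(len(bin_data) * N)
print(p_hat)  # 0.13, matching the numerical optimum
```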

5. Solution Reaction (5 Points)

The equilibrium is given by:

$$k(T) = \frac{[A][B]}{[AB]} = \frac{x\times x}{1.5 - x} \frac{1}{25 \textrm{gal}}$$

The factor of $1/25$ arises because the equilibrium constant is written in concentration units and the tank volume is 25 gal.

The change in temperature can be found by rearranging the heat capacity definition:

$$ \Delta Q = c_p m \Delta T$$

$$m = V\rho$$

The enthalpy of reaction can be used to compute the heat going into the tank.

$$ \Delta \hat{H} \times x = \Delta Q$$

$$\Delta T = \frac{\Delta \hat{H} x}{c_p V \rho}$$

Therefore, the temperature of the tank is given by:

$$T(x) = T_0 + \frac{\Delta \hat{H} x}{c_p V \rho}$$

where $T_0$ is the initial tank temperature.

Now we can rewrite the equilibrium constant in terms of $x$:

$$ k(x) = Ae^{\frac{-B}{RT(x)}} + C\left(\frac{B}{RT(x)} - 0.5\right)^2 $$

Substituting this into the equilibrium expression above gives an equation with $x$ on both sides, which can be solved numerically as a root-finding problem:

$$k(x) - \frac{x\times x}{1.5 - x} \frac{1}{25 \textrm{gal}} = 0 $$

Grading

  • Neglected water heating -3
  • Didn't use water mass, instead used 1.5 or extent -1
  • Set up correctly, didn't optimize correctly -1

In [5]:
A = 10.
B = 500.
C = 10.**-3
R = 1.986

def k(T):
    return A * np.exp(-B / (R * T)) + C * (B / (R * T) - 0.5)**2

def delta_T(x):
    cp = 17.89 #BTU / (lb-mol R)
    density = 8.345 #lb / gal
    MW = 18 #lb / lb-mol
    delta_Q = 863.9 * x
    delta_T = delta_Q / (cp * density * 25.0 / MW)
    return delta_T

def eqn(x):
    return x ** 2 / (1.5 - x) / 25.0 - k(510 + delta_T(x))




x = brentq(eqn, a=10**-10, b=1.5 - 10**-10)
print x
print 1.5 - x, "lb-mol of AB Remains" 
print 'Temperature is', 510 + delta_T(x)

x = np.arange(0.01, 1.5, 0.01)
f1 = k(510 + delta_T(x))
f2 = x ** 2 / (1.5 - x) / 25.0 

plt.plot(x, f1)
plt.plot(x, f2)
plt.show()


1.48562207929
0.0143779207147 lb-mol of AB Remains
Temperature is 516.189671977
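As a sanity check, substituting the reported extent back into the root equation should give a near-zero residual. A sketch using only `math` and the same constants as above:

```python
import math

A, B, C, R = 10.0, 500.0, 1.0e-3, 1.986
x = 1.48562207929  # extent of reaction found by brentq

# Temperature rise from the enthalpy of reaction and the water's heat capacity
cp, density, MW = 17.89, 8.345, 18.0
T = 510.0 + 863.9 * x / (cp * density * 25.0 / MW)

k = A * math.exp(-B / (R * T)) + C * (B / (R * T) - 0.5) ** 2
residual = x**2 / (1.5 - x) / 25.0 - k
print(residual)  # ~0
```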

6. 2D Minimization (5 Points)


In [7]:
import scipy.optimize

def obj(x):
    return np.cos(x[0] + 1) * np.sin(x[1]) + (x[0] + 2)**2
def constraint(x):
    return -1 * x[0] * x[1]

print constraint([1,-2])
print constraint([1,2])
constraints = {'type':'ineq', 'fun':constraint}

print scipy.optimize.minimize(obj, x0=[0,0], constraints=constraints)


2
-2
  status: 0
 success: True
    njev: 6
    nfev: 25
     fun: 2.5744519255533931e-08
       x: array([ -1.99999954e+00,   4.76479299e-08])
 message: 'Optimization terminated successfully.'
     jac: array([  9.80806521e-07,   5.40302695e-01,   0.00000000e+00])
     nit: 6

Why isn't it working?

We can see that this won't arrive at the function value given in the hint. You should also see that different starting conditions give different values. The objective is non-convex, so we must use basin hopping.


In [12]:
from math import pi

minimizer_args = {'constraints': constraints, 'bounds': [(-2*pi, 2*pi), (-2*pi, 2*pi)]}
print scipy.optimize.basinhopping(obj, x0=[0,0], minimizer_kwargs=minimizer_args, niter=1000)
scipy.optimize.basinhopping(obj, x0=[0,0], minimizer_kwargs=minimizer_args)


                  nfev: 26004
 minimization_failures: 0
                   fun: -0.67519533465082748
                     x: array([-1.68403752,  4.71239065])
               message: ['requested number of basinhopping iterations completed successfully']
                  njev: 6229
                   nit: 1000
Out[12]:
                  nfev: 2116
 minimization_failures: 0
                   fun: -1.2124802138186466e-07
                     x: array([ -1.99999805e+00,  -2.24414093e-07])
               message: ['requested number of basinhopping iterations completed successfully']
                  njev: 515
                   nit: 100

So we see that the final minimum is at $x \approx -1.68$ and $y \approx 4.71$
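We can confirm the reported optimum by evaluating the objective and the constraint at it (a sketch; note $y = 4.71239 \approx 3\pi/2$, which makes $\sin y = -1$):

```python
import math

x0, x1 = -1.68403752, 4.71239065  # optimum reported by basinhopping
obj = math.cos(x0 + 1) * math.sin(x1) + (x0 + 2) ** 2
print(obj)            # ~-0.675, matching the reported fun
print(-x0 * x1 >= 0)  # True: the inequality constraint is satisfied
```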

7. Sample Covariance (3 Points)

The correlation is high in magnitude (strong correlation) and negative (since we saw in the plot that the slope is negative). The covariance between the two is negative, as expected. The variance in $x$ simply reflects the spread of the evenly spaced $x$ grid, and the covariance is approximately the variance in $x$ times the slope. Key parts for grading: stating that the correlation is strong and negative to match the slope, and, for the covariance, a comment that it includes the spread (variance) of the data.


In [49]:
print np.corrcoef(data_3_x,data_3_y,ddof=1)

print np.cov(data_3_x,data_3_y,ddof=1)


[[ 1.         -0.99887012]
 [-0.99887012  1.        ]]
[[  8.41666667 -16.75073737]
 [-16.75073737  33.41255499]]
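The claim that the covariance is roughly the variance in $x$ times the slope is exact for perfectly linear data, since $\textrm{cov}(x, mx) = m\,\textrm{var}(x)$. A sketch on made-up data with slope $-2$:

```python
def mean(v):
    return sum(v) / float(len(v))

def cov(u, v):
    # sample covariance with ddof=1
    mu, mv = mean(u), mean(v)
    return sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v)) / (len(u) - 1)

m = -2.0
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [m * xi for xi in x]  # exactly linear, slope m
print(cov(x, y))          # equals m * var(x)
print(m * cov(x, x))      # same value
```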

8. Functions of Distributions (1 Point)

This is applying a function on a continuous distribution. You can think of it as a change of random variables.

We have $g(t) = e^{0.9t}$ and $p(t) = \lambda e^{-\lambda t}$, where $\lambda = \frac{1}{10}$ if $t$ is in years.

The equation given in lecture is:

$$p(Y = y) = \int_Q \delta(g(x) - y)p(x)\,dx$$

and then using our function and pdf:

$$p(N = n) = \int_0^\infty \delta \left( e^{0.9t} - n \right) \lambda e^{-\lambda t}\,dt$$
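Evaluating the delta function with the change-of-variables rule $\delta(g(t) - n) = \delta(t - t^*)/|g'(t^*)|$, where $t^* = \ln n / 0.9$ is the root of $e^{0.9t} = n$, gives (a sketch of the remaining algebra; $g'(t^*) = 0.9\,e^{0.9t^*} = 0.9\,n$):

$$p(N = n) = \frac{\lambda e^{-\lambda t^*}}{0.9\,n} = \frac{\lambda}{0.9\,n}\, n^{-\lambda/0.9} = \frac{\lambda}{0.9}\, n^{-(1 + \lambda/0.9)}, \quad n \ge 1$$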

In [ ]: