In [1]:
from numpy import array, dot, outer, sqrt, matrix
from numpy.linalg import eig, eigvals
from matplotlib.pyplot import hist
In [2]:
%matplotlib inline
In [3]:
rv = array([1,2]) # a 1-D array, which numpy treats as a row vector
rv
Out[3]:
In [4]:
cv = array([[3],[4]]) # a column vector
cv
Out[4]:
In [5]:
dot(rv,cv)
Out[5]:
In [6]:
dot(cv,rv) # raises ValueError: shapes (2,1) and (2,) are not aligned
In [7]:
outer(rv,cv)
Out[7]:
In [8]:
outer(cv,rv)
Out[8]:
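Since the outputs are not reproduced above, here is a quick sketch of how the shapes work out (variable names `rv`/`cv` match the cells above):

```python
import numpy as np

rv = np.array([1, 2])        # shape (2,)
cv = np.array([[3], [4]])    # shape (2, 1)

inner = np.dot(rv, cv)       # shape (1,): 1*3 + 2*4 = 11
outer_rc = np.outer(rv, cv)  # outer() flattens its inputs, so both orders give 2x2
outer_cr = np.outer(cv, rv)  # same entries, transposed arrangement
```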
In [9]:
# Complex numbers in python have a j term:
a = 1+2j
In [10]:
v1 = array([1+2j, 3+2j, 5+1j, 4+0j])
The complex conjugate changes the sign of the imaginary part:
In [11]:
v1.conjugate()
Out[11]:
In [12]:
dot(v1.conjugate(),v1)
Out[12]:
In [13]:
# a two-dimensional array
m1 = array([[2,1],[2,1]])
m1
Out[13]:
In [14]:
# the transpose is available via the T attribute:
m1.T
Out[14]:
In [15]:
# find the eigenvalues and eigenvectors of a matrix:
eig(m1)
Out[15]:
We can also use the matrix type, which is like array but restricted to 2-D. In addition, matrix provides .H and .I attributes for the Hermitian conjugate and the inverse, respectively. For more information, see Stack Overflow question #4151128.
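Note that numpy.matrix is deprecated in current NumPy releases, so it is worth knowing the plain-array equivalents: `.conj().T` plays the role of `.H`, and `numpy.linalg.inv` plays the role of `.I` (a sketch, not part of the original notebook):

```python
import numpy as np

m2 = np.array([[2, 1], [2, 1]], dtype=complex)
hermitian = m2.conj().T              # same result as matrix's .H

# m2 is singular (det = 0), so demonstrate the inverse on a nonsingular matrix:
n = np.array([[0., 1.], [-2., 3.]])
n_inv = np.linalg.inv(n)             # same result as matrix's .I
```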
In [16]:
m2 = matrix( [[2,1],[2,1]])
In [17]:
m2.H
Out[17]:
In [18]:
eig(m2)
Out[18]:
In [19]:
# use a question mark to get help on a command
eig?
In [20]:
M14 = array([[0,1],[-2,3]])
In [21]:
eig(M14)
Out[21]:
Interpret this result: the two eigenvalues are 1 and 2. The eigenvectors look like strange decimals, but we can check them against the stated solution:
In [22]:
1/sqrt(2) # this is the value for both entries in the first eigenvector
Out[22]:
In [23]:
1/sqrt(5) # this is the first value in the second eigenvector
Out[23]:
In [24]:
2/sqrt(5) # this is the second value in the second eigenvector
Out[24]:
In [25]:
eigvals(M14)
Out[25]:
The signs are opposite to those in the book, but an overall (-) doesn't matter in the interpretation of eigenvectors: only the "direction" matters (the relative sizes of the entries).
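We can verify that claim directly: if v is an eigenvector of M14 with eigenvalue λ, then -v satisfies the same eigenvalue equation (a small check, using numpy.linalg.eig's column convention):

```python
import numpy as np

M14 = np.array([[0, 1], [-2, 3]])
evals, evecs = np.linalg.eig(M14)

for lam, v in zip(evals, evecs.T):               # eig returns eigenvectors as columns
    assert np.allclose(M14 @ v, lam * v)          # v is an eigenvector...
    assert np.allclose(M14 @ (-v), lam * (-v))    # ...and so is -v, with the same eigenvalue
```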
In [26]:
M16 = array([[0,-1j],[1j,0]])
In [27]:
evals, evecs = eig(M16)
In [28]:
evecs
Out[28]:
In [29]:
evecs[:,0]
Out[29]:
In [30]:
evecs[:,1]
Out[30]:
In [31]:
dot(evecs[:,0].conjugate(),evecs[:,1])
Out[31]:
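Since M16 is Hermitian, its eigenvectors should come out orthonormal. numpy's vdot conjugates its first argument, so it computes the complex inner product directly (a sketch mirroring the cells above):

```python
import numpy as np

M16 = np.array([[0, -1j], [1j, 0]])
evals, evecs = np.linalg.eig(M16)

overlap = np.vdot(evecs[:, 0], evecs[:, 1])  # vdot conjugates its first argument
norm0 = np.vdot(evecs[:, 0], evecs[:, 0])    # should be 1 for a normalized eigenvector
```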
In [32]:
from qutip import *
In [33]:
# Create a row vector:
qv = Qobj([[1,2]])
qv
Out[33]:
In [34]:
# Find the corresponding column vector
qv.dag()
Out[34]:
In [35]:
qv2 = Qobj([[1+2j,4-1j]])
qv2
Out[35]:
In [36]:
qv2.dag()
Out[36]:
In [37]:
qv2*qv2.dag() # inner product (dot product)
Out[37]:
In [38]:
qv2.dag()*qv2 # outer product
Out[38]:
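The same bra/ket bookkeeping can be mimicked with plain NumPy arrays, which makes the inner/outer distinction explicit (a sketch; qutip is not required here):

```python
import numpy as np

bra = np.array([[1 + 2j, 4 - 1j]])   # row vector, like Qobj([[1+2j, 4-1j]])
ket = bra.conj().T                   # the .dag() operation: conjugate transpose

inner = bra @ ket                    # 1x1: |1+2j|^2 + |4-1j|^2 = 5 + 17 = 22
outer = ket @ bra                    # 2x2 matrix
```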
In [39]:
qm = Qobj([[1,2],[2,1]])
qm
Out[39]:
In [40]:
qm.eigenenergies() # in quantum mechanics (as we will learn) eigenvalues often correspond to energy levels
Out[40]:
In [41]:
evals, evecs = qm.eigenstates()
In [42]:
evecs
Out[42]:
In [43]:
evecs[1]
Out[43]:
In [44]:
# Solution
data = [10,13,14,14,6,8,7,9,12,14,13,11,10,7,7]
# Fill in the hist() function:
n, bins, patches = hist(data, bins=9, range=(5,14))
Find the constant $c$ that normalizes the probability density: $$p(x) = \begin{cases} ce^{-ax}, & x \geq 0 \\[2ex] 0 & x < 0 \end{cases} $$
Hint: using sympy, we can calculate the relevant integral. Passing conds='none' asks the solver to ignore any convergence conditions on the variables in the integral. This is fine for most of our integrals, since the variables are usually real, well-behaved numbers.
In [51]:
# Partial Solution:
from sympy import *
c, a, x = symbols("c a x", positive=True) # positive=True lets the integral converge cleanly
first = integrate( c*exp(-a*x) ,(x,0,oo),conds='none')
first
Out[51]:
In [55]:
check = integrate( a*exp(-a*x) ,(x,0,oo),conds='none')
check
Out[55]:
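To finish the normalization, set the integral equal to 1 and solve for c (a sketch using sympy's solve; symbol names match the cells above):

```python
from sympy import symbols, integrate, exp, oo, Eq, solve

c, a, x = symbols("c a x", positive=True)
total = integrate(c * exp(-a * x), (x, 0, oo), conds='none')  # evaluates to c/a
solution = solve(Eq(total, 1), c)    # normalization requires c = a
```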
In [ ]: