May 4, 2015
Nearly all of physics is many-body physics at the most
microscopic level of understanding.
A theoretical understanding of the behavior of quantum mechanical systems with many interacting particles,
accompanied by experiments and simulations,
is a great challenge and provides fundamental insights into systems governed by quantum mechanical principles.
Many-body physics applications range from condensed matter physics (for example antiferromagnets, quantum liquids and solids, plasmas, metals, superconductors), nuclear and high energy physics (for example nuclear matter, finite nuclei, quark-gluon plasmas), dense matter astrophysics (for example neutron stars, white dwarfs), atoms and molecules (for example electron correlations, reactions, quantum chemistry), and elementary particles (for example quantum field theory, lattice gauge field models).
Microscopic many-body methodologies have attained a high degree of sophistication over the last decades, from both theoretical and applied points of view. These developments have allowed researchers to adapt these methods to a much wider range of physical systems and to obtain quantitative descriptions of many observables.
Several of these methods have been developed independently of each other.
In some cases the same methods have been applied and studied in different
fields of research with little overlap and exchange of knowledge.
A classic example is the development of coupled-cluster approaches,
which originated from the nuclear many-body problem in the late fifties.
These methods were adapted and refined
to extremely high precision and predictability by quantum chemists over
the next four decades. The developments comprised, and still do,
both methodological innovations and the exploitation of advances in high-performance computing.
As time elapsed,
the overlap with the original nuclear many-body problem gradually vanished,
dwindling to an almost non-existent exchange of knowledge and methodologies.
These methods were however revived in nuclear physics towards the turn of the last century and offer now a viable approach
to ab initio nuclear structure studies, spanning the whole range of stable closed-shell nuclei and several open-shell ones.

A word of warning is in order here, since it is our feeling that the label ab initio, attached to many titles of scientific articles nowadays, has reached an inflationary stage.
By ab initio we will mean solving Schroedinger's or Dirac's many-particle equations starting with the presumably best
inter-particle Hamiltonian, making no approximations. The particles that make up the many-body problem are approximated
as the constituent ones; in the nuclear many-body problem
these are various baryons such as protons, neutrons, isobars and hyperons, together with low-mass mesons. All many-body approaches discussed in this text involve
some approximations, and only a few of them, if any, can reach the glorious stage of being truly ab initio.
Truly ab initio in nuclear physics would mean solving the many-body problem starting from quarks and gluons.
Many of the developments are also not
always readily accessible to a larger community of
researchers, and especially not to young and uninitiated ones. A notable exception is
again the quantum chemistry community, where highly efficient codes and methods are made available.
Numerical packages like
Gamess, Dalton and Gaussian provide a reliable computational
platform for many chemical applications and studies.
In other fields this is not always the case.
Nuclear physics, where we have our basic research experience, is such a field. This book therefore aims at giving you an overview of widely used quantum mechanical many-body methods. We also want to provide you with computational tools that enable you to perform studies of realistic systems in nuclear physics. The methods we discuss are however not limited to applications in nuclear physics; with minor changes one can apply them to other fields, from quantum chemistry, via studies of atoms and molecules, to materials science and solid-state devices such as quantum dots.
Typical methods we will discuss are configuration interaction and large-scale diagonalization methods (the shell model in nuclear physics), perturbative many-body methods, coupled cluster theory, Green's function approaches, various Monte Carlo methods, extensions to weakly bound systems and links from ab initio methods to density functional and mean-field theories. The methods we have chosen are those which enjoy a great degree of popularity in low- and intermediate-energy nuclear physics, but due to both our limited knowledge and space considerations, some of these will be treated in a rather cavalier way.
Focusing on methods and tools only is however not necessarily a recipe for success; as we all know, a bad workman quarrels with his tools. Our aim is to provide you with a didactical description of some of these powerful methods and tools, with the purpose of allowing you to gain further insights and possibly working experience in modern many-body methods. There are several textbooks on the market which cover specific many-body methods and/or applications, but almost none of them presents an updated and comparative description of various many-body methods. The reason for choosing such an approach is that we want you, the reader, to gain experience and form your own opinion on the suitability and applicability of specific methods. Too often you may have met many-body practitioners claiming 'my method is better than your method because...'. We wish to avoid such a pitfall, although our approach, as any other, is tainted by our biases, preferences and, obviously, ignorance about all possible details.
We also want to shed light, in a hopefully unbiased way, on possible interplays and
differences between the various many-body methods with a focus
on the physics content.
We have therefore selected several physical systems,
from simple models to realistic cases, whose properties can be
described within the theoretical frameworks presented here.
In this text we will expose you to problems which tour many nuclear physics applications,
from the basic forces which bind nucleons together to compact objects such as neutron stars.
Our hope is that the material we have provided allows the reader to assess
for himself or herself the pros and cons of the methods discussed.
Moreover, the text can be used and read as a large project. Each chapter ends with a specific project. The code you end up developing for that particular project can in turn be reused in the next chapter and its pertaining project. To give you an example, in chapter xx you will end up constructing your own nucleon-nucleon interaction and solving its Schroedinger equation using the Lippmann-Schwinger equation. This solution results in the so-called $T$-matrix, which in turn can be related to the experimental phase shifts. The Lippmann-Schwinger equation can easily be modified to account for a given nuclear medium, resulting in the so-called $G$-matrix. In chapter xx you end up computing such an effective interaction, using similarity transformations as well. With these codes you are in turn in a position where you can define effective interactions for, say, coupled-cluster calculations (the topic of chapter xx) or shell-model effective interactions (topics for chapters xx and xx).
Our hope is that this will enable you to assess at a much deeper level the pros and cons of the various methods.
We believe firmly that knowledge of the strengths and weaknesses of a given method allows you to realize where
improvements can be made. The text has also a strong focus on computational issues, from how to build up large many-body codes
to parallelization and high-performance computing topics.
This text spans some four hundred pages, and with the wide scope we aim at,
there are bound to be topics which will not be dealt with properly.
In particular, we will not cover reaction theories, although we do provide all the inputs needed
for such studies, namely one-body densities, spectroscopic factors and optical potentials.
For density functional theories we discuss how one can constrain the exchange and correlation contributions using ab initio methods.
Furthermore, when it comes to the underlying nuclear forces, we will assume that various baryons and mesons are the
effective degrees of freedom, limiting ourselves to low- and intermediate-energy nuclear physics. A discussion of recent
advances in lattice quantum chromodynamics (QCD) and its link to effective field theories is also outside the scope
of this book, although we will refer to advances in these fields as well and link the construction of nuclear forces to
ongoing research in effective field theory and lattice QCD.
The material on infinite matter can be extended to finite temperatures, but we do not
venture into the realm of heavy-ion collisions and the search for the holy grail of the quark-gluon plasma.
We will throughout remain loyal to a physical world governed by the degrees of freedom of baryons and mesons.
We apologize for these shortcomings, which are mainly due to
our lack of detailed research knowledge in the above fields. To be a jack of all trades leads normally to mastering none.
A final warning, however: this text is not a text on many-body theories; rather, it is a text on applications and implementations of many-body theories. This applies also to many of the algorithms we discuss. We do not go into details about, for example, Gaussian quadrature or the mathematical foundations of random walks and the Metropolis algorithm. We will often simply state results, but add enough references to tutorial texts so that you can look up the missing wonders of numerical mathematics yourself. Similarly, the handling of angular momenta and their recouplings via $3j$, $6j$ or $9j$ symbols should not come as a bolt from the blue. But again, don't despair, we'll guide you safely to the appropriate literature.
We have therefore taken the liberty to have certain expectations about you, the potential reader.
We assume that you are at least a graduate student who is embarking on studies in
nuclear physics and that
you have some familiarity with basic nuclear physics, angular momentum theory and many-body theory,
typically at the level of texts like those of Talmi, Fetter and Walecka, Blaizot and Ripka, Dickhoff and Van Neck or similar monographs (add refs later).
We will obviously state and repeat the basic rules and theorems, but will not go into derivations.
Some of the derivations will however be left to you via various exercises interspersed throughout the text.
Acknowledgements of friends, colleagues and sponsors will be added here. Needless to say, any remaining errors and misunderstandings are entirely our own.
To understand why matter is stable, and thereby shed light on the limits of
nuclear stability, is one of the
overarching aims and intellectual challenges
of basic research in nuclear physics. To relate the stability of matter
to the underlying fundamental forces and particles of nature as manifested in nuclear matter, is central
to present and planned rare isotope facilities.
Important properties of nuclear systems which can reveal information about these topics
are for example masses, and thereby binding energies, and density distributions of nuclei.
These are quantities which convey important information on
the shell structure of nuclei, with their
pertinent magic numbers and shell closures or the eventual disappearance of the latter
away from the valley of stability.
Neutron-rich nuclei are particularly interesting for the above endeavour. As a particular chain
of isotopes becomes more and more neutron rich, one finally reaches the limit of stability, the so-called
drip line, where adding one more neutron makes the next isotope unstable with respect
to the previous one. The appearance or not of magic numbers and shell structures,
the formation of neutron skins and halos
can thence be probed via investigations of quantities like the binding energy
or the charge radii and neutron rms radii of neutron-rich nuclei.
These quantities have in turn important
consequences for theoretical models of nuclear structure and their application in astrophysics.
For example, the neutron radius of $\,{}^{208}\mbox{Pb}$, recently extracted from the PREX
experiment at Jefferson Laboratory, can be used to constrain the equation of state of
neutron matter. A quantity related to the
neutron rms radius $r_n^{\mathrm{rms}}=\langle r^2\rangle_n^{1/2}$ is the neutron skin
$r_{\mathrm{skin}}=r_n^{\mathrm{rms}}-r_p^{\mathrm{rms}}$,
where $r_p^{\mathrm{rms}}$ is the corresponding proton rms radius.
There are several properties which relate the thickness of the neutron skin to quantities in nuclei and
nuclear matter, such as the symmetry energy at the saturation point for nuclear matter, the slope
of the equation of state for neutron matter
or the low-energy electric dipole strength due to the pygmy dipole resonance.
See Ref. http://iopscience.iop.org/1402-4896/2013/T152 for several interesting discussions of these topics.
Having access to precise measurements of masses, radii, and electromagnetic moments for a wide range of nuclei allows one to study trends with varying neutron excess. A quantitative description of various experimental data with quantified uncertainty still remains a major challenge for nuclear structure theory. Global theoretical studies of isotopic chains, such as the Ca chain shown in the figure below, make it possible to test systematic properties of effective interactions between nucleons. Such calculations also provide critical tests of the limitations of many-body methods. As one approaches the particle emission thresholds, it becomes increasingly important to describe correctly the coupling to the continuum of decay and scattering channels. While the full treatment of antisymmetrization and short-range correlations has become routine in first-principle approaches (to be defined later) to nuclear bound states, the many-body problem becomes more difficult when long-range correlations and continuum effects are considered.
Expected experimental information on the calcium isotopes that can be obtained at FRIB. The limits for detailed spectroscopic information are around $A\sim 60$.
The aim of this first section is to present some of the experimental data which can be used to extract information about correlations in nuclear systems. In particular, we will start with a theoretical analysis of a quantity called the separation energy for neutrons or protons. This quantity, to be discussed below, is defined as the difference between two binding energies (masses) of neighboring nuclei. As we will see from various figures below and exercises as well, the separation energies display a varying behavior as function of the number of neutrons or protons. These variations from one nucleus to another one, laid the foundation for the introduction of so-called magic numbers and a mean-field picture in order to describe nuclei theoretically.
With a mean- or average-field picture we mean that a given nucleon (either a proton or a neutron) moves in an average potential field which is set up by all other nucleons in the system. Consider for example a nucleus like $\,{}^{17}\mbox{O}$ with nine neutrons and eight protons. Many properties of this nucleus can be interpreted in terms of a picture where we can view it as one neutron on top of $\,{}^{16}\mbox{O}$. We infer from data and our theoretical interpretations that this additional neutron behaves almost as an individual neutron which sees an average interaction set up by the remaining 16 nucleons in $\,{}^{16}\mbox{O}$. A nucleus like $\,{}^{16}\mbox{O}$ is an example of what we in this course will denote as a good closed-shell nucleus. We will come back to what this means later.
Since we want to develop a theory capable of interpreting data in terms of our laws of motion and the pertinent forces, we can think of this neutron as a particle which moves in a potential field. We can hence attempt at solving our equations of motion (Schroedinger's equation in our case) for this system along the same lines as we did in atomic physics when we solved Schroedinger's equation for the hydrogen atom. We just need to define a model for our effective single-particle potential.
A simple potential model which enjoys quite some popularity in nuclear physics, is the three-dimensional harmonic oscillator. This potential model captures some of the physics of deeply bound single-particle states but fails in reproducing the less bound single-particle states. A parametrized, and more realistic, potential model which is widely used in nuclear physics, is the so-called Woods-Saxon potential. Both the harmonic oscillator and the Woods-Saxon potential models define computational problems that can easily be solved (see below), resulting (with the appropriate parameters) in a rather good reproduction of experiment for nuclei which can be approximated as one nucleon on top (or one nucleon removed) of a so-called closed-shell system.
To be able to interpret a nucleus in such a way requires at least that we are capable of parametrizing the abovementioned interactions in order to reproduce say the excitation spectrum of a nucleus like $\,{}^{17}\mbox{O}$.
With such a parametrized interaction we are able to solve Schroedinger's equation for the motion of one nucleon in a given field. A nucleus is however a true and complicated many-nucleon system, with extremely many degrees of freedom and complicated correlations, rendering the ideal solution of the many-nucleon Schroedinger equation an impossible enterprise. It is much easier to solve a single-particle problem with say a Woods-Saxon potential. Using such a potential hides however many of the complicated correlations and interactions which we see in nuclei. Such an effective single-nucleon potential is for example not capable of describing properties like the binding energy or the rms radius of a given nucleus.
An improvement to these simpler single-nucleon potentials is given by the Hartree-Fock method, where the variational principle is used to define a mean-field which the nucleons move in. There are many different classes of mean-field methods. An important difference between these methods and the simpler parametrized mean-field potentials like the harmonic oscillator and the Woods-Saxon potentials, is that the resulting equations contain information about the nuclear forces present in our models for solving Schroedinger's equation. Hartree-Fock and other mean-field methods like density functional theory form core topics in later lectures.
The aim of this section is to present some of the experimental data we will confront theory with. In particular, we will focus on separation and shell-gap energies and use these to build a picture of nuclei in terms of a single-particle picture (from a philosophical standpoint we would call this a reductionist approach). The harmonic oscillator will serve as an excellent starting point in building nuclei from the bottom up. Here we will neglect nuclear forces; these are introduced in the next section when we discuss the Hartree-Fock method.
The aim of this course is to develop our physics intuition of nuclear systems using a theoretical approach where we describe data in terms of the motion of individual nucleons and their mutual interactions.
How our theoretical pictures and models can be used to interpret data is in essence what this course is about. Our narrative will lead us along a path where we start with single-particle models and end with the theory of the nuclear shell-model. The latter will be used to understand and analyze excitation spectra and decay patterns of nuclei, linking our theoretical understanding with interpretations of experiment. The way we build up our theoretical descriptions and interpretations follows what we may call a standard reductionistic approach, that is we start with what we believe are our effective degrees of freedom (nucleons in our case) and interactions amongst these and solve thereafter the underlying equations of motions. This defines the nuclear many-body problem, and mean-field approaches like Hartree-Fock theory and the nuclear shell-model represent different approaches to our solutions of Schroedinger's equation.
We start our tour of experimental data and our interpretations by considering the chain of oxygen isotopes. In the exercises below you will be asked to perform similar analyses for other chains of isotopes.
The oxygen isotopes are the heaviest isotopes for which the drip line is well established. The drip line is defined as the point where adding one more nucleon leads to an unbound nucleus. Below we will see that we can define the dripline by studying the separation energy. Where the neutron (proton) separation energy changes sign as a function of the number of neutrons (protons) defines the neutron (proton) drip line.
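As a concrete illustration of this criterion, the small sketch below scans a list of neutron separation energies, ordered by increasing $N$ (the numbers are placeholders for illustration only), and reports the first isotope for which $S_n$ becomes negative.

In [1]:
# Sketch: locate the neutron drip line from a list of (N, S_n) values in MeV.
# The numbers below are placeholders; in practice they would be computed from
# measured binding energies as discussed below.
sn_values = [(14, 7.6), (15, 4.1), (16, 6.9), (17, -0.3), (18, 0.5)]
for N, sn in sn_values:
    if sn < 0:
        print("First neutron-unbound isotope at N =", N)
        break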
The oxygen isotopes are simple enough to be described by some few selected single-particle degrees of freedom.
Two of the even-even isotopes exhibit a doubly magic nature, namely $\,{}^{22}\mbox{O}$ ($Z=8$, $N=14$) and $\,{}^{24}\mbox{O}$ ($Z=8$, $N=16$).
The structure of $\,{}^{22}\mbox{O}$ and $\,{}^{24}\mbox{O}$ is assumed to be governed by the evolution of the $1s_{1/2}$ and $0d_{5/2}$ one-quasiparticle states.
The isotopes $\,{}^{25}\mbox{O}$, $\,{}^{26}\mbox{O}$, $\,{}^{27}\mbox{O}$ and $\,{}^{28}\mbox{O}$ are outside the drip line, since the $0d_{3/2}$ orbit is not bound.
These isotopes have been studied in a series of recent experiments worldwide. Some of these experiments and theoretical interpretations are discussed in the following articles:
$\,{}^{24}\mbox{O}$ and lighter: C. R. Hoffman et al., Phys. Lett. B 672, 17 (2009); R. Kanungo et al., Phys. Rev. Lett. 102, 152501 (2009); C. R. Hoffman et al., Phys. Rev. C 83, 031303(R) (2011); M. Stanoiu et al., Phys. Rev. C 69, 034312 (2004)
$\,{}^{25}\mbox{O}$: C. R. Hoffman et al., Phys. Rev. Lett. 102,152501 (2009).
$\,{}^{26}\mbox{O}$: E. Lunderberg et al., Phys. Rev. Lett. 108, 142503 (2012).
$\,{}^{26}\mbox{O}$: Z. Kohley et al., Phys. Rev. Lett. 110, 152501 (2013).
Theory: Otsuka et al., Phys. Rev. Lett. 105, 032501 (2010); Hagen et al., Phys. Rev. Lett. 108, 242501 (2012).
Our first approach in analyzing these data theoretically is to see if we can use the experimental information to
extract information about a so-called single-particle behavior, and to
interpret such a behavior in terms of the underlying forces and microscopic physics.
The next step is to see if we could use these interpretations to say something about shell closures and magic numbers. Since we focus on single-particle properties, a quantity we can extract from experiment is the separation energy for protons and neutrons. Before we proceed, we need to define quantities like masses and binding energies. Two excellent reviews on recent trends in the determination of nuclear masses can be found in http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.75.1021 and in http://iopscience.iop.org/1402-4896/2013/T152/014017/
A basic quantity which can be measured for the ground states of nuclei is the atomic mass $M(N, Z)$ of the neutral atom with atomic mass number $A$ and charge $Z$. The number of neutrons is $N$.
Atomic masses are usually tabulated in terms of the mass excess, defined by
$$\Delta M(N, Z) = M(N, Z) - uA,$$
where $u$ is the Atomic Mass Unit, with $uc^2 = 931.494$ MeV.
In this course we will mainly use data from the 2003 compilation of Audi, Wapstra and Thibault, see http://www.sciencedirect.com/science/journal/03759474/729/1.
The nucleon masses are
$$m_p c^2 = 938.272\ \mathrm{MeV},$$
and
$$m_n c^2 = 939.565\ \mathrm{MeV}.$$
In the 2003 mass evaluation there are 2127 nuclei measured with an accuracy of 0.2 MeV or better, and 101 nuclei measured with an accuracy of greater than 0.2 MeV. For heavy nuclei one observes several chains of nuclei with a constant $N-Z$ value whose masses are obtained from the energy released in $\alpha$-decay.
The nuclear binding energy is defined as the energy required to break up a given nucleus into its constituent parts of $N$ neutrons and $Z$ protons. In terms of the atomic masses $M(N, Z)$ the binding energy is defined by
$$BE(N, Z) = ZM_H c^2 + Nm_n c^2 - M(N, Z)c^2,$$
where $M_H$ is the mass of the hydrogen atom and $m_n$ is the mass of the neutron. In terms of the mass excess the binding energy is given by
$$BE(N, Z) = Z\Delta_H c^2 + N\Delta_n c^2 - \Delta M(N, Z)c^2,$$
where $\Delta_H c^2 = 7.2890$ MeV and $\Delta_n c^2 = 8.0713$ MeV.
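As a small numerical illustration of the last formula, the sketch below evaluates the binding energy from a tabulated mass excess; the mass excess used for $\,{}^{16}\mbox{O}$ is quoted from memory ($\approx -4.737$ MeV) and should be checked against the mass evaluation discussed above.

In [1]:
# Binding energy from mass excesses (all numbers in MeV)
DeltaH = 7.2890     # mass excess of hydrogen times c^2
Deltan = 8.0713     # mass excess of the neutron times c^2

def binding_energy(N, Z, mass_excess):
    # BE(N,Z) = Z*Delta_H*c^2 + N*Delta_n*c^2 - Delta M(N,Z)*c^2
    return Z*DeltaH + N*Deltan - mass_excess

# Example: oxygen-16, mass excess approximately -4.737 MeV (check against the tables)
print(binding_energy(N=8, Z=8, mass_excess=-4.737))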
The following python program reads in the experimental data on binding energies, stored in the file bindingenergies.dat, and plots them as a function of the mass number $A$. One notices clearly a saturation of the binding energy per nucleon at $A\approx 56$.
In [1]:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
# Load in data file
data = np.loadtxt("datafiles/bindingenergies.dat")
# Make arrays containing x-axis and binding energies as function of A
x = data[:,2]
bexpt = data[:,3]
plt.plot(x, bexpt ,'ro')
plt.axis([0,270,-1, 10.0])
plt.xlabel(r'$A$')
plt.ylabel(r'Binding energies in [MeV]')
plt.legend(['Experiment'], loc='upper right')
plt.title(r'Binding energies from experiment')
plt.savefig('expbindingenergies.pdf')
plt.savefig('expbindingenergies.png')
plt.show()
A popular and physically intuitive model which can be used to parametrize the experimental binding energies as functions of $A$ is the so-called liquid drop model. The ansatz is based on the following expression
$$BE(N,Z) = a_1A - a_2A^{2/3} - a_3\frac{Z^2}{A^{1/3}} - a_4\frac{(N-Z)^2}{A},$$
where $A$ stands for the number of nucleons and the $a_i$s are parameters which are determined by a fit to the experimental data.
To arrive at the above expression we have made the following assumptions:
There is a volume term $a_1A$ proportional with the number of nucleons (the energy is also an extensive quantity). When an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. This contribution is proportional to the volume.
There is a surface energy term $a_2A^{2/3}$. The assumption here is that a nucleon at the surface of a nucleus interacts with fewer other nucleons than one in the interior of the nucleus and hence its binding energy is less. This surface energy term takes that into account and is therefore negative and is proportional to the surface area.
There is a Coulomb energy term $a_3\frac{Z^2}{A^{1/3}}$. The electric repulsion between each pair of protons in a nucleus yields less binding.
There is an asymmetry term $a_4\frac{(N-Z)^2}{A}$. This term is associated with the Pauli exclusion principle and reflects the fact that the proton-neutron interaction is more attractive on average than the neutron-neutron and proton-proton interactions.
We could also add a so-called pairing term, which is a correction term that arises from the tendency of proton pairs and neutron pairs to occur. An even number of particles is more stable than an odd number. Performing a least-square fit to data, we obtain the following numerical values for the various constants
$a_1=15.49$ MeV
$a_2=17.23$ MeV
$a_3=0.697$ MeV
$a_4=22.6$ MeV
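As a quick sanity check of these numbers, the small sketch below evaluates the liquid drop expression for a given nucleus and prints the binding energy per nucleon; the choice of $\,{}^{56}\mbox{Fe}$ ($Z=26$, $N=30$) is just an illustration.

In [1]:
def liquid_drop(N, Z, a1=15.49, a2=17.23, a3=0.697, a4=22.6):
    # Liquid drop binding energy in MeV with the fitted parameters quoted above
    A = N + Z
    return a1*A - a2*A**(2.0/3.0) - a3*Z**2/A**(1.0/3.0) - a4*(N-Z)**2/A

# Example: iron-56; dividing by A gives the binding energy per nucleon in MeV
print(liquid_drop(N=30, Z=26)/56)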
The Python program below allows you to perform a fit of the above parameters using nonlinear least-squares curve fitting.
The following python program now reads in the experimental data on binding energies as well as the results from the above liquid drop model and plots these energies as functions of the mass number $A$. One sees that for larger values of $A$ there is better agreement with data.
In [1]:
import numpy as np
from matplotlib import pyplot as plt
# Load in data file
data = np.loadtxt("datafiles/bindingenergies.dat")
# Make arrays containing x-axis and binding energies as function of A
x = data[:,2]
bexpt = data[:,3]
liquiddrop = data[:,4]
plt.plot(x, bexpt ,'b-o', x, liquiddrop, 'r-o')
plt.axis([0,270,-1, 10.0])
plt.xlabel(r'$A$')
plt.ylabel(r'Binding energies in [MeV]')
plt.legend(('Experiment','Liquid Drop'), loc='upper right')
plt.title(r'Binding energies from experiment and liquid drop')
plt.savefig('bindingenergies.pdf')
plt.savefig('bindingenergies.png')
plt.show()
This python program now reads in the experimental data on binding energies and performs a nonlinear least-squares fit of the data. In the example here we use only the parameters $a_1$ and $a_2$, leaving it as an exercise to the reader to perform the fit for all four parameters. The results are plotted and compared with the experimental values. To read more about nonlinear least-squares methods, see for example the text of M.J. Box, D. Davies and W.H. Swann, Non-Linear Optimisation Techniques, Oliver & Boyd, 1969.
In [1]:
import numpy as np
from scipy.optimize import curve_fit
from matplotlib import pyplot as plt
# Load in data file
data = np.loadtxt("datafiles/bindingenergies.dat")
# Make arrays containing A on x-axis and binding energies
A = data[:,2]
bexpt = data[:,3]
# The function we want to fit to, only two terms here
def func(A,a1, a2):
return a1*A-a2*(A**(2.0/3.0))
# function to perform nonlinear least square with guess for a1 and a2
popt, pcov = curve_fit(func, A, bexpt, p0 = (16.0, 18.0))
a1 = popt[0]
a2 = popt[1]
liquiddrop = a1*A-a2*(A**(2.0/3.0))
plt.plot(A, bexpt ,'bo', A, liquiddrop, 'ro')
plt.axis([0,270,-1, 10.0])
plt.xlabel(r'$A$')
plt.ylabel(r'Binding energies in [MeV]')
plt.legend(('Experiment','Liquid Drop'), loc='upper right')
plt.title(r'Binding energies from experiment and liquid drop')
plt.savefig('bindingenergies.pdf')
plt.savefig('bindingenergies.png')
plt.show()
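Since curve_fit also returns the covariance matrix pcov, one can estimate the uncertainties of the fitted parameters. A minimal sketch, reusing the popt and pcov arrays produced by the program above, could read

In [1]:
import numpy as np
# One-standard-deviation uncertainties of the fitted parameters a1 and a2,
# estimated from the diagonal of the covariance matrix returned by curve_fit
perr = np.sqrt(np.diag(pcov))
print("a1 = %.3f +/- %.3f MeV" % (popt[0], perr[0]))
print("a2 = %.3f +/- %.3f MeV" % (popt[1], perr[1]))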
We are now interested in interpreting experimental binding energies in terms of a single-particle picture. In order to do so, we consider first energy conservation for nuclear transformations that include, for example, the fusion of two nuclei $a$ and $b$ into the combined system $c$,
$$a + b \rightarrow c,$$
or the decay of nucleus $c$ into two other nuclei $a$ and $b$,
$$c \rightarrow a + b.$$
In general we have the reactions
$$\sum_i X_i \rightarrow \sum_f X_f,$$
with an energy release (the $Q$ value) given by the difference between the total masses of the initial and final nuclei,
$$Q = \left(\sum_i M_i - \sum_f M_f\right)c^2 = \sum_f BE(N_f,Z_f) - \sum_i BE(N_i,Z_i).$$
We require also that the number of protons and neutrons (the total number of nucleons) is conserved in the initial stage and final stage, unless we have processes which violate baryon number conservation,
$$\sum_i N_i = \sum_f N_f \quad\mathrm{and}\quad \sum_i Z_i = \sum_f Z_f.$$
Spontaneous decay involves a single initial nuclear state and is allowed if $Q > 0$. In the decay, energy is released in the form of the kinetic energy of the final products. Reactions involving two initial nuclei are called endothermic (a net loss of energy) if $Q < 0$. The reactions are exothermic (a net release of energy) if $Q > 0$.
Let us study the Q values associated with the removal of one or two nucleons from a nucleus. These are conventionally defined in terms of the one-nucleon and two-nucleon separation energies. The neutron separation energy is defined as
$$S_n = -Q_n = BE(N,Z) - BE(N-1,Z),$$
and the proton separation energy reads
$$S_p = -Q_p = BE(N,Z) - BE(N,Z-1).$$
The two-neutron separation energy is defined as
$$S_{2n} = -Q_{2n} = BE(N,Z) - BE(N-2,Z),$$
and the two-proton separation energy is given by
$$S_{2p} = -Q_{2p} = BE(N,Z) - BE(N,Z-2).$$
Using say the neutron separation energies (alternatively the proton separation energies),
we can define the so-called energy gap for neutrons (or protons) as
$$\Delta S_n = 2BE(N,Z) - BE(N-1,Z) - BE(N+1,Z),$$
or
$$\Delta S_n = S_n(N,Z) - S_n(N+1,Z).$$
This quantity can in turn be used to determine which nuclei are magic or not. For protons we would have
$$\Delta S_p = 2BE(N,Z) - BE(N,Z-1) - BE(N,Z+1).$$
We leave it as an exercise to the reader to define and interpret the two-neutron or two-proton gaps.
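To make these definitions concrete, the small sketch below evaluates $S_n$ and $\Delta S_n$ for $\,{}^{16}\mbox{O}$ from a few total binding energies; the numbers are approximate experimental values for $\,{}^{15}\mbox{O}$, $\,{}^{16}\mbox{O}$ and $\,{}^{17}\mbox{O}$, quoted here for illustration only and to be checked against the mass tables.

In [1]:
# Total binding energies BE(N, Z) in MeV, here for (N, Z) = (7, 8), (8, 8) and (9, 8)
be = {(7, 8): 111.96, (8, 8): 127.62, (9, 8): 131.76}

def S_n(N, Z):
    # Neutron separation energy S_n = BE(N,Z) - BE(N-1,Z)
    return be[(N, Z)] - be[(N-1, Z)]

def delta_S_n(N, Z):
    # Neutron shell gap Delta S_n = 2 BE(N,Z) - BE(N-1,Z) - BE(N+1,Z)
    return 2*be[(N, Z)] - be[(N-1, Z)] - be[(N+1, Z)]

print(S_n(8, 8), delta_S_n(8, 8))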
The following python programs can now be used to plot the separation energies and the energy gaps for the oxygen isotopes. The code below reads the separation energies from file for all oxygen isotopes from $A=13$ to $A=25$. The data are taken from the file snox.dat, which contains the separation energies and the shell gap energies.
In [1]:
import numpy as np
from matplotlib import pyplot as plt
# Load in data file
data = np.loadtxt("datafiles/snox.dat")
# Make arrays containing x-axis (number of neutrons) and separation energies
x = data[:,1]
y = data[:,2]
plt.plot(x, y,'b-+',markersize=6)
plt.axis([4,18,-1, 25.0])
plt.xlabel(r'Number of neutrons $N$',fontsize=20)
plt.ylabel(r'$S_n$ [MeV]',fontsize=20)
plt.legend(['Separation energies for oxygen isotopes'], loc='upper right')
plt.title(r'Separation energy for the oxygen isotopes')
plt.savefig('snoxygen.pdf')
plt.savefig('snoxygen.png')
plt.show()
Here we display the python program for plotting the corresponding results for the shell gaps of the oxygen isotopes.
In [1]:
import numpy as np
from matplotlib import pyplot as plt
# Load in data file
data = np.loadtxt("datafiles/snox.dat")
# Make arrays containing x-axis (number of neutrons) and shell gap energies
x = data[:,1]
y = data[:,3]
plt.plot(x, y,'b-+',markersize=6)
plt.axis([4,18,-7, 12.0])
plt.xlabel(r'Number of neutrons $N$',fontsize=20)
plt.ylabel(r'$\Delta S_n$ [MeV]',fontsize=20)
plt.legend(['Shell gap energies for oxygen isotopes'], loc='upper right')
plt.title(r'Shell gap energies for the oxygen isotopes')
plt.savefig('gapoxygen.pdf')
plt.savefig('gapoxygen.png')
plt.show()
Since we will focus in the beginning on single-particle degrees of freedom and mean-field approaches, before we turn to nuclear forces and many-body approaches like the nuclear shell model, there are some features to be noted.
One may thus infer that intrinsic properties of nucleons in a nucleus are close to those of free nucleons.
In the discussion of the neutron separation energies for the oxygen isotopes, we note a clear staggering effect between odd and even isotopes with the even ones being more bound (larger separation energies). We will later link this to strong pairing correlations in nuclei.
The neutron separation energy becomes negative at $\,{}^{25}\mbox{O}$, making this nucleus unstable with respect to the emission of one neutron. A nucleus like $\,{}^{24}\mbox{O}$ is thus the last bound oxygen isotope which has been observed. Oxygen-26 has been found to be unbound with respect to $\,{}^{24}\mbox{O}$, see http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.108.142503.
We note also that there are large shell-gaps for some nuclei, meaning that more energy is needed to remove one nucleon. These gaps are used to define so-called magic numbers. For the oxygen isotopes we see a clear gap for $\,{}^{16}\mbox{O}$. We will interpret this gap as one of several experimental properties that define so-called magic numbers. In our discussion below we will make a first interpretation using single-particle states from the harmonic oscillator and the Woods-Saxon potential.
In the exercises below you will be asked to perform a similar analysis for other chains of isotopes and interpret the results.
The root-mean-square (rms) charge radius has been measured for the ground states of many nuclei. For a spherical charge density $\rho(\boldsymbol{r})$, the mean-square radius is defined by
$$\langle r^2\rangle = \frac{\int d\boldsymbol{r}\, r^2 \rho(\boldsymbol{r})}{\int d\boldsymbol{r}\, \rho(\boldsymbol{r})},$$
and the rms radius is the square root of this quantity, denoted by
$$R=\sqrt{\langle r^2\rangle}.$$
Radii for most stable nuclei have been deduced from electron scattering form factors and/or from the x-ray transition energies of muonic atoms. The relative radii for a series of isotopes can be extracted from the isotope shifts of atomic x-ray transitions. The rms radius for the nuclear point-proton density, $R_p$ is obtained from the rms charge radius by:
where
where
is the rms radius of the proton, $R^2_{\mathrm{on}} = 0.116(2)$ $\mbox{fm}^{2}$ is the mean-square radius of the neutron and $R^2_{\mathrm{rel}} = 0.033$ $\mbox{fm}^{2}$ is the relativistic Darwin-Foldy correction. There are also smaller nucleus-dependent relativistic spin-orbit and mesonic-exchange corrections that should be included.
We will now introduce the potential models we have discussed above, namely the harmonic oscillator and the Woods-Saxon potentials. In order to proceed, we need some definitions.
We define an operator as $\hat{O}$ throughout. Unless otherwise specified the total number of nucleons is always $A$ and $d$ is the dimension of the system. In nuclear physics we normally define the total number of particles to be $A=N+Z$, where $N$ is the total number of neutrons and $Z$ the total number of protons. In the case of other baryons such as isobars $\Delta$ or various hyperons such as $\Lambda$ or $\Sigma$, one needs to add their definitions. When we refer to a single neutron we will use the label $n$ and when we refer to a single proton we will use the label $p$. Unless otherwise specified, we will simply refer to these particles as nucleons.
The quantum numbers of a single-particle state in coordinate space are defined by the variables
where
with $d=1,2,3$ represents the spatial coordinates and $\sigma$ is the eigenspin of the particle. For fermions with eigenspin $1/2$ this means that
and the integral
and
Since we are dealing with protons and neutrons we need to add isospin as a new degree of freedom.
Including isospin $\tau$ we have
where
For nucleons, which are fermions with eigenspin $1/2$ and isospin $1/2$ this means that
and the integral
and
We will use the standard nuclear physics definition of isospin, resulting in $\tau_z=-1/2$ for protons and $\tau_z=1/2$ for neutrons.
The quantum mechanical wave function of a given state with quantum numbers $\lambda$ (encompassing all quantum numbers needed to specify the system), ignoring time, is
with $x_i=(\boldsymbol{r}_i,\sigma_i,\tau_i)$ and the projections of $\sigma_i$ and $\tau_i$ take the values $\{-1/2,+1/2\}$. We will hereafter always refer to $\Psi_{\lambda}$ as the exact wave function, and if the ground state is not degenerate we label it as
Since the solution $\Psi_{\lambda}$ can seldom be found in closed form, approximations are sought. In this text we define an approximate wave function, or ansatz, to the exact wave function as
with
being the ansatz for the ground state.
The wave function $\Psi_{\lambda}$ is sought in the Hilbert space of either symmetric or anti-symmetric $N$-body functions, namely
where the single-particle Hilbert space $\hat{H}_1$ is the space of square integrable functions over $\in {\mathbb{R}}^{d}\oplus (\sigma)\oplus (\tau)$ resulting in
Our Hamiltonian is invariant under the permutation (interchange) of two particles. Since we deal with fermions, however, the total wave function is antisymmetric. Let $\hat{P}$ be an operator which interchanges two particles. Due to the symmetries we have ascribed to our Hamiltonian, this operator commutes with the total Hamiltonian,
$$[\hat{H},\hat{P}] = 0,$$
meaning that $\Psi_{\lambda}(x_1, x_2, \dots , x_A)$ is an eigenfunction of $\hat{P}$ as well, that is
$$\hat{P}_{ij}\Psi_{\lambda}(x_1, x_2, \dots, x_i, \dots, x_j, \dots, x_A) = \beta\Psi_{\lambda}(x_1, x_2, \dots, x_i, \dots, x_j, \dots, x_A),$$
where $\beta$ is the eigenvalue of $\hat{P}$. We have introduced the suffix $ij$ in order to indicate that we permute particles $i$ and $j$. The Pauli principle tells us that the total wave function for a system of fermions has to be antisymmetric, resulting in the eigenvalue $\beta = -1$.
The Schroedinger equation reads
$$\hat{H}(x_1, x_2, \dots , x_A)\Psi_{\lambda}(x_1, x_2, \dots , x_A) = E_{\lambda}\Psi_{\lambda}(x_1, x_2, \dots , x_A),$$
where the vector $x_i$ represents the coordinates (spatial, spin and isospin) of particle $i$, $\lambda$ stands for all the quantum numbers needed to classify a given $A$-particle state and $\Psi_{\lambda}$ is the pertaining eigenfunction. Throughout this course, $\Psi$ refers to the exact eigenfunction, unless otherwise stated.
We write the Hamilton operator, or Hamiltonian, in a generic way as
$$\hat{H} = \hat{T} + \hat{V},$$
where $\hat{T}$ represents the kinetic energy of the system,
$$\hat{T} = \sum_{i=1}^{A} \frac{\boldsymbol{p}_i^2}{2m_i},$$
while the operator $\hat{V}$ for the potential energy is given by
Hereafter we use natural units, viz. $\hbar=c=e=1$, with $e$ the elementary charge and $c$ the speed of light. This means that momenta and masses have dimension energy.
The potential energy part includes also an external potential $\hat{u}_{\mathrm{ext}}(x_i)$.
In a non-relativistic approach to atomic physics, this external potential is given by the attraction an electron feels from the atomic nucleus. The latter being much heavier than the involved electrons, is often used to define a natural center of mass. In nuclear physics there is no such external potential. It is the nuclear force which results in binding in nuclear systems. In a non-relativistic framework, the nuclear force contains two-body, three-body and more complicated degrees of freedom. The potential energy reads then
Three-body and more complicated forces arise since we are dealing with protons and neutrons as effective degrees of freedom. We will come back to this topic later. Furthermore, in large parts of these lectures we will assume that the potential energy can be approximated by a two-body interaction only. Our Hamiltonian reads then
with
The interaction (or potential energy term) reads now
In nuclear physics the one-body part $u_{\mathrm{ext}}(x_i)$ is often approximated by a harmonic oscillator potential or a Woods-Saxon potential. However, this is not fully correct, because as we have discussed, nuclei are self-bound systems and there is no external confining potential. As we will see later, the $\hat{H}_0$ part of the hamiltonian cannot be used to compute the binding energy of a nucleus since it is not based on a model for the nuclear forces. That is, the binding energy is not the sum of the individual single-particle energies.
Why do we introduce the Hamiltonian in the form
$$\hat{H} = \hat{H}_0 + \hat{H}_I?$$
There are many reasons for this. Let us look at some of them, using the harmonic oscillator in three dimensions as our starting point. For the harmonic oscillator we know that
$$\hat{h}_0(x_i)\psi_{\alpha}(x_i) = \varepsilon_{\alpha}\psi_{\alpha}(x_i),$$
where the eigenvalues are $\varepsilon_{\alpha}$ and the eigenfunctions are $\psi_{\alpha}(x_i)$. The subscript $\alpha$ represents quantum numbers like the orbital angular momentum $l_{\alpha}$, its projection $m_{l_{\alpha}}$ and the
principal quantum number $n_{\alpha}=0,1,2,\dots$.
The eigenvalues are
$$\varepsilon_{n_{\alpha}l_{\alpha}} = \hbar\omega\left(2n_{\alpha}+l_{\alpha}+\frac{3}{2}\right).$$
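The degeneracy of these levels (including the factor of two from spin) is what produces the naive oscillator magic numbers quoted later in this chapter. A small sketch to generate them reads

In [1]:
# Degeneracy of the 3D harmonic oscillator major shells (N = 2n + l) including spin,
# and the resulting cumulative, naive magic numbers 2, 8, 20, 40, 70, 112, ...
cumulative = 0
for N in range(6):
    degeneracy = (N+1)*(N+2)
    cumulative += degeneracy
    print(N, degeneracy, cumulative)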
The following mathematical properties of the harmonic oscillator are handy.
First of all we have a complete basis of orthogonal eigenvectors. These have well-known expressions and can easily be encoded.
With a complete basis $\psi_{\alpha}(x_i)$, we can construct a new basis $\phi_{\tau}(x_i)$ by expanding in terms of the harmonic oscillator basis, that is
$$\phi_{\tau}(x_i) = \sum_{\alpha} C_{\tau\alpha}\psi_{\alpha}(x_i),$$
where $C_{\tau\alpha}$ represents the overlap between the two basis sets.
The harmonic oscillator (a shifted one by a negative constant) provides also a very good approximation to most bound single-particle states. Furthermore, it serves as a starting point in building up our picture of nuclei, in particular how we define magic numbers and systems with one nucleon added to (or removed from) a closed-shell core nucleus. The figure here shows the various harmonic oscillator states, with those obtained with a Woods-Saxon potential as well, including a spin-orbit splitting (to be discussed below).
Single-particle spectrum and quantum numbers for a harmonic oscillator potential and a Woods-Saxon potential with and without a spin-orbit force.
In nuclear physics the one-body part $u_{\mathrm{ext}}(x_i)$ is often approximated by a harmonic oscillator potential. However, as we also noted with the Woods-Saxon potential there is no external confining potential in nuclei.
What many people do then, is to add and subtract a harmonic oscillator potential, with
where $\omega$ is the oscillator frequency. This leads to
with
Many practitioners use this as the standard Hamiltonian when doing nuclear structure calculations. This is acceptable if the number of nucleons is large, but even with this Hamiltonian we do not obey translational invariance. How can we cure this?
In setting up a translationally invariant Hamiltonian
the following expressions are helpful.
The center-of-mass (CoM) momentum is
and we have that
meaning that
In a similar fashion we can define the CoM coordinate
which yields
If we then introduce the harmonic oscillator one-body Hamiltonian
with $\omega$ the oscillator frequency, we can rewrite the latter as
Alternatively, we could write it as
The center-of-mass term is defined as
The translationally invariant one- and two-body Hamiltonian reads for an A-nucleon system,
where $V_{ij}$ is the nucleon-nucleon interaction. Adding zero as here
we can then rewrite the Hamiltonian as
The Woods-Saxon potential is a mean-field potential for the nucleons (protons and neutrons) inside an atomic nucleus. It represents the average potential that a given nucleon feels from the forces exerted by all the other nucleons. The parametrization is
$$V(r) = -\frac{V_0}{1+\exp{\left((r-R)/a\right)}},$$
with $V_0\approx 50$ MeV representing the potential well depth, $a\approx 0.5$ fm representing the "surface thickness" of the nucleus and $R=r_0A^{1/3}$, with $r_0=1.25$ fm and $A$ the number of nucleons. The value for $r_0$ can be extracted from a fit to data, see for example M. Kirson, http://www.sciencedirect.com/science/article/pii/S037594740600769X.
The following python code produces a plot of the Woods-Saxon potential with the above parameters.
In [1]:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import rc, rcParams
import matplotlib.units as units
import matplotlib.ticker as ticker
rc('text',usetex=True)
rc('font',**{'family':'serif'})
font = {'family' : 'serif',
'color' : 'darkred',
'weight' : 'normal',
'size' : 16,
}
v0 = 50
A = 100
a = 0.5
r0 = 1.25
R = r0*A**(0.3333)
x = np.linspace(0.0, 10.0)
y = -v0/(1+np.exp((x-R)/a))
plt.plot(x, y, 'b-')
plt.title(r'{\bf Woods-Saxon potential}', fontsize=20)
plt.text(3, -40, r'Parameters: $A=100$, $V_0=50$ [MeV]', fontdict=font)
plt.text(3, -44, r'$a=0.5$ [fm], $r_0=1.25$ [fm]', fontdict=font)
plt.xlabel(r'$r$ [fm]',fontsize=20)
plt.ylabel(r'$V(r)$ [MeV]',fontsize=20)
# Tweak spacing to prevent clipping of ylabel
plt.subplots_adjust(left=0.15)
plt.savefig('woodsaxon.pdf', format='pdf')
From the plot we notice that the potential rapidly approaches zero as $r$ goes to infinity, reflecting the short-range nature of the strong nuclear force, that for large $A$ it is approximately flat in the center, and that nucleons near the surface of the nucleus experience a large force directed towards the center.
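The last observation can be made more quantitative by computing the radial force $F(r)=-dV/dr$ numerically. The sketch below reuses the arrays x and y and the radius R defined in the program above.

In [1]:
import numpy as np
# The radial force -dV/dr is negative (directed towards the center) and its
# magnitude peaks close to the nuclear surface r = R
force = -np.gradient(y, x)
rpeak = x[np.argmax(np.abs(force))]
print("The force is strongest at r =", rpeak, "fm, close to R =", R, "fm")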
We have introduced a single-particle Hamiltonian
$$\hat{h}_0(x_i) = \hat{t}(x_i) + \hat{u}_{\mathrm{ext}}(x_i),$$
with an external and central symmetric potential $u_{\mathrm{ext}}(x_i)$, which is often approximated by a harmonic oscillator potential or a Woods-Saxon potential. A central symmetric potential of this kind leads to a degeneracy in energy which is not observed experimentally. We see this from, for example, our discussion of separation energies and magic numbers: in addition to the magic numbers $2,8,20,40,70,\dots$ which follow from a harmonic oscillator basis, there are magic numbers like $28$, $50$, $82$ and $126$.
To produce these additional numbers, we need to add a phenomenological spin-orbit force which lifts the degeneracy, that is, we introduce a modified single-particle Hamiltonian
$$\hat{h}(x_i) = \hat{h}_0(x_i) + \xi(\boldsymbol{r})\boldsymbol{l}\cdot\boldsymbol{s}.$$
We can calculate the expectation value of the spin-orbit term using the fact that
$$\boldsymbol{l}\cdot\boldsymbol{s} = \frac{1}{2}\left(\hat{j}^2 - \hat{l}^2 - \hat{s}^2\right).$$
For a single-particle state with quantum numbers $nlj$ (we suppress $s$ and $m_j$), with $s=1/2$, we obtain the single-particle energies
$$\varepsilon_{nlj} = \varepsilon_{nlj}^{(0)} + \Delta\varepsilon^{\mathrm{so}}_{nlj},$$
with $\varepsilon_{nlj}^{(0)}$ being the single-particle energy obtained with $\hat{h}_0(x)$ and
$$\Delta\varepsilon^{\mathrm{so}}_{nlj} = \langle nlj|\xi(\boldsymbol{r})\boldsymbol{l}\cdot\boldsymbol{s}|nlj\rangle.$$
The spin-orbit force gives thus an additional contribution to the energy,
$$\Delta\varepsilon^{\mathrm{so}}_{nlj} = \frac{C}{2}\left(j(j+1)-l(l+1)-\frac{3}{4}\right),$$
which lifts the degeneracy we have seen before in the harmonic oscillator or Woods-Saxon potentials. The value $C$ is the radial integral involving $\xi(\boldsymbol{r})$. Depending on the value of $j=l\pm 1/2$, we obtain
$$\Delta\varepsilon^{\mathrm{so}}_{nlj=l+1/2} = \frac{C}{2}l,$$
or
$$\Delta\varepsilon^{\mathrm{so}}_{nlj=l-1/2} = -\frac{C}{2}(l+1),$$
clearly lifting the degeneracy. Note well that till now we have simply postulated the spin-orbit force in an ad hoc way. Later, we will see how this term arises from the two-nucleon force in a natural way.
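As a simple consequence of the two expressions above, the splitting between the two spin-orbit partners is
$$\Delta\varepsilon^{\mathrm{so}}_{nlj=l-1/2} - \Delta\varepsilon^{\mathrm{so}}_{nlj=l+1/2} = -\frac{C}{2}\left(2l+1\right),$$
so that its magnitude grows linearly with $l$; for $l=3$, for example, it amounts to $7|C|/2$. In nuclei the $j=l+1/2$ partner is pushed down in energy, and it is precisely this lowering of high-$l$ orbitals such as $0f_{7/2}$ and $0g_{9/2}$ which produces the additional magic numbers $28$, $50$, $82$ and $126$ discussed above.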
With the spin-orbit force, we can modify our Woods-Saxon potential to
with
where we have
We can also add, in the case of protons, a Coulomb potential. The Woods-Saxon potential has been widely used in parametrizations of effective single-particle potentials. However, as was the case with the harmonic oscillator, none of these potentials are linked directly to the nuclear forces. Our next step is to build a mean field based on the nucleon-nucleon interaction. This will lead us to our first and simplest many-body theory, Hartree-Fock theory.
The Woods-Saxon potential does not allow for closed-form or analytical solutions of the eigenvalue problem
$$\hat{h}\psi_{\alpha} = \varepsilon_{\alpha}\psi_{\alpha}.$$
For the harmonic oscillator in three dimensions we have closed-form expressions for the energies and analytical solutions for the eigenstates, with the latter given by either Hermite polynomials (cartesian coordinates) or Laguerre polynomials (spherical coordinates).
Solving the above equation numerically is, however, rather straightforward.
We will illustrate the numerical solution of Schroedinger's equation by solving it for the harmonic oscillator in three dimensions. It is straightforward to replace the harmonic oscillator potential with a Woods-Saxon potential, or any other type of potential.
We are interested in the solution of the radial part of Schroedinger's equation for one nucleon. The angular momentum part is given by the so-called spherical harmonics.
The radial equation reads
$$-\frac{\hbar^2}{2m}\left(\frac{1}{r^2}\frac{d}{dr}r^2\frac{d}{dr} - \frac{l(l+1)}{r^2}\right)R(r) + V(r)R(r) = ER(r).$$
In our case $V(r)$ is the harmonic oscillator potential $(1/2)kr^2$ with $k=m\omega^2$ and $E$ is the energy of the harmonic oscillator in three dimensions. The oscillator frequency is $\omega$ and the energies are
$$E_{nl} = \hbar\omega\left(2n+l+\frac{3}{2}\right),$$
with $n=0,1,2,\dots$ and $l=0,1,2,\dots$.
Since we have made a transformation to spherical coordinates it means that
$r\in [0,\infty)$.
The quantum number $l$ is the orbital angular momentum of the nucleon. We then substitute $R(r) = (1/r) u(r)$ and obtain
$$-\frac{\hbar^2}{2m}\frac{d^2 u(r)}{dr^2} + \left(V(r)+\frac{l(l+1)}{r^2}\frac{\hbar^2}{2m}\right)u(r) = Eu(r).$$
The boundary conditions are $u(0)=0$ and $u(\infty)=0$.
We introduce a dimensionless variable $\rho = (1/\alpha) r$, where $\alpha$ is a constant with dimension length, and get
$$-\frac{\hbar^2}{2m\alpha^2}\frac{d^2 u(\rho)}{d\rho^2} + \left(V(\rho)+\frac{l(l+1)}{\rho^2}\frac{\hbar^2}{2m\alpha^2}\right)u(\rho) = Eu(\rho).$$
Let us specialize to $l=0$. Inserting $V(\rho) = (1/2) k \alpha^2\rho^2$ we end up with
$$-\frac{\hbar^2}{2m\alpha^2}\frac{d^2 u(\rho)}{d\rho^2} + \frac{k}{2}\alpha^2\rho^2 u(\rho) = Eu(\rho).$$
We multiply thereafter with $2m\alpha^2/\hbar^2$ on both sides and obtain
$$-\frac{d^2 u(\rho)}{d\rho^2} + \frac{mk}{\hbar^2}\alpha^4\rho^2 u(\rho) = \frac{2m\alpha^2}{\hbar^2}Eu(\rho).$$
The constant $\alpha$ can now be fixed so that
$$\frac{mk}{\hbar^2}\alpha^4 = 1,$$
or
$$\alpha = \left(\frac{\hbar^2}{mk}\right)^{1/4}.$$
Defining
$$\lambda = \frac{2m\alpha^2}{\hbar^2}E,$$
we can rewrite Schroedinger's equation as
$$-\frac{d^2 u(\rho)}{d\rho^2} + \rho^2 u(\rho) = \lambda u(\rho).$$
This is the first equation to solve numerically. In three dimensions the eigenvalues for $l=0$ are $\lambda_0=3,\lambda_1=7,\lambda_2=11,\dots .$
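With the scaling introduced above the exact dimensionless eigenvalues are $\lambda_{nl} = 4n+2l+3$ (since $\lambda = 2E/(\hbar\omega)$), which the numerical results obtained below can be checked against. A small sketch listing them reads

In [1]:
# Exact dimensionless eigenvalues lambda = 4n + 2l + 3 of the scaled oscillator problem,
# useful for checking the numerically computed eigenvalues below
for l in range(3):
    print("l =", l, [4*n + 2*l + 3 for n in range(5)])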
We use the standard expression for the second derivative of a function $u$,
$$u''(\rho) = \frac{u(\rho+h) -2u(\rho) +u(\rho-h)}{h^2} +O(h^2),$$
where $h$ is our step. Next we define minimum and maximum values for the variable $\rho$, $\rho_{\mathrm{min}}=0$ and $\rho_{\mathrm{max}}$, respectively. You need to check your results for the energies against different values of $\rho_{\mathrm{max}}$, since we cannot set $\rho_{\mathrm{max}}=\infty$.
With a given number of steps, $n_{\mathrm{step}}$, we then define the step $h$ as
$$h=\frac{\rho_{\mathrm{max}}-\rho_{\mathrm{min}}}{n_{\mathrm{step}}}.$$
Defining an arbitrary value of $\rho$ as
$$\rho_i= \rho_{\mathrm{min}} + ih, \qquad i=0,1,2,\dots, n_{\mathrm{step}},$$
we can rewrite the Schroedinger equation for $\rho_i$ as
$$-\frac{u(\rho_i+h) -2u(\rho_i) +u(\rho_i-h)}{h^2}+\rho_i^2u(\rho_i)  = \lambda u(\rho_i),$$
or in a more compact way,
$$-\frac{u_{i+1} -2u_i +u_{i-1}}{h^2}+V_iu_i  = \lambda u_i,$$
where $V_i=\rho_i^2$ is the harmonic oscillator potential.
Define first the diagonal matrix element
$$d_i=\frac{2}{h^2}+V_i,$$
and the non-diagonal matrix element
$$e_i=-\frac{1}{h^2}.$$
In this case the non-diagonal matrix elements are given by a mere constant; all the non-diagonal matrix elements are equal.
With these definitions the Schroedinger equation takes the following form
$$d_iu_i+e_{i-1}u_{i-1}+e_{i+1}u_{i+1}  = \lambda u_i,$$
where $u_i$ is unknown. We can write the latter equation as a matrix eigenvalue problem
$$\hat{A}\boldsymbol{u} = \lambda\boldsymbol{u},$$
or, if we wish to be more detailed, we can write the tridiagonal matrix $\hat{A}$ as
$$\hat{A} = \begin{bmatrix} d_1 & e_1 & 0 & \dots & 0 \\ e_1 & d_2 & e_2 & \dots & 0 \\ 0 & e_2 & d_3 & \dots & 0 \\ \vdots & & \ddots & \ddots & e_{n_{\mathrm{step}}-2} \\ 0 & \dots & 0 & e_{n_{\mathrm{step}}-2} & d_{n_{\mathrm{step}}-1} \end{bmatrix}.$$
Recall that the solutions are known via the boundary conditions at $i=n_{\mathrm{step}}$ and at the other end point, that is for $\rho_0$. The solution is zero in both cases.
The following python program is an example of how one can obtain the eigenvalues for a single nucleon moving in a harmonic oscillator potential. It is rather easy to replace the one-body potential with another one, such as a Woods-Saxon potential.
The c++ and Fortran versions of this program can be found at https://github.com/NuclearStructure/PHY981/tree/master/doc/pub/spdata/programs.
The c++ program uses the c++ library armadillo, see http://arma.sourceforge.net/.
To install armadillo see the guidelines at http://www.uio.no/studier/emner/matnat/fys/FYS4411/v14/guides/installing-armadillo/.
For mac users we recommend using brew, see http://brew.sh/.
If you use ipython notebook, you can run c++ programs following the instructions at http://nbviewer.ipython.org/github/dragly/cppmagic/blob/master/example.ipynb
The code sets up the Hamiltonian matrix by defining the minimum and maximum values of $r$ and the number of integration points. These are set in the initialization function. The program plots the eigenfunctions of the three lowest eigenstates.
In [1]:
#Program which solves the one-particle Schrodinger equation
#for a potential specified in function
#potential(). This example is for the harmonic oscillator in 3d
from matplotlib import pyplot as plt
import numpy as np
#Function for initialization of parameters
def initialize():
RMin = 0.0
RMax = 10.0
lOrbital = 0
Dim = 400
return RMin, RMax, lOrbital, Dim
# Here we set up the harmonic oscillator potential
def potential(r):
return r*r
#Get the boundary, orbital momentum and number of integration points
RMin, RMax, lOrbital, Dim = initialize()
#Initialize constants
Step = RMax/(Dim+1)
DiagConst = 2.0 / (Step*Step)
NondiagConst = -1.0 / (Step*Step)
OrbitalFactor = lOrbital * (lOrbital + 1.0)
#Calculate array of potential values
v = np.zeros(Dim)
r = np.linspace(RMin,RMax,Dim)
for i in range(Dim):
    r[i] = RMin + (i+1) * Step
    v[i] = potential(r[i]) + OrbitalFactor/(r[i]*r[i])
#Setting up tridiagonal matrix and find eigenvectors and eigenvalues
Hamiltonian = np.zeros((Dim,Dim))
Hamiltonian[0,0] = DiagConst + v[0];
Hamiltonian[0,1] = NondiagConst;
for i in range(1,Dim-1):
    Hamiltonian[i,i-1] = NondiagConst
    Hamiltonian[i,i] = DiagConst + v[i]
    Hamiltonian[i,i+1] = NondiagConst
Hamiltonian[Dim-1,Dim-2] = NondiagConst;
Hamiltonian[Dim-1,Dim-1] = DiagConst + v[Dim-1];
# diagonalize and obtain eigenvalues, not necessarily sorted
EigValues, EigVectors = np.linalg.eig(Hamiltonian)
# sort eigenvectors and eigenvalues
permute = EigValues.argsort()
EigValues = EigValues[permute]
EigVectors = EigVectors[:,permute]
# now plot the results for the three lowest lying eigenstates
for i in range(3):
    print(EigValues[i])
FirstEigvector = EigVectors[:,0]
SecondEigvector = EigVectors[:,1]
ThirdEigvector = EigVectors[:,2]
plt.plot(r, FirstEigvector**2 ,'b-',r, SecondEigvector**2 ,'g-',r, ThirdEigvector**2 ,'r-')
plt.axis([0,4.6,0.0, 0.025])
plt.xlabel(r'$r$')
plt.ylabel(r'Radial probability $r^2|R(r)|^2$')
plt.title(r'Radial probability distributions for three lowest-lying states')
plt.savefig('eigenvector.pdf')
plt.savefig('eigenvector.png')
plt.show()
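Since the Hamiltonian matrix above is tridiagonal, one does not need to build and diagonalize the full dense matrix. A minimal sketch using scipy.linalg.eigh_tridiagonal, assuming SciPy is installed and reusing the quantities DiagConst, NondiagConst, v and Dim from the program above, could read as follows; it exploits the tridiagonal structure and is considerably faster for large Dim.

In [1]:
import numpy as np
from scipy.linalg import eigh_tridiagonal
# Diagonal and off-diagonal elements of the tridiagonal Hamiltonian matrix
d = DiagConst + v
e = NondiagConst*np.ones(Dim-1)
# Eigenvalues are returned in ascending order
EigValues, EigVectors = eigh_tridiagonal(d, e)
print(EigValues[:3])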
The data on binding energies can be found in the file bedata.dat at the github address of the course, see https://github.com/NuclearStructure/PHY981/tree/master/doc/pub/spdata/programs
a) Write a small program which reads in the proton and neutron numbers and the binding energies and makes a plot of all neutron separation energies for the chains of oxygen (O), calcium (Ca), nickel (Ni), tin (Sn) and lead (Pb) isotopes, that is, you need to plot
$$S_n = BE(N,Z)-BE(N-1,Z)$$
as a function of $N$ for each chain.
Comment your results.
b) In the same figures you should also include the liquid drop model results of Eq. (2.17) of Alex Brown's text, namely
$$BE(N,Z) = \alpha_1A - \alpha_2A^{2/3} - \alpha_3\frac{Z^2}{A^{1/3}} - \alpha_4\frac{(N-Z)^2}{A},$$
with $\alpha_1=15.49$ MeV, $\alpha_2=17.23$ MeV, $\alpha_3=0.697$ MeV and $\alpha_4=22.6$ MeV. Again, comment your results.
c) Make also a plot of the binding energies as function of $A$ using the data in the file on binding energies and the above liquid drop model. Make a figure similar to figure 2.5 of Alex Brown where you set the various parameters $\alpha_i=0$. Comment your results.
d) Use the liquid drop model to find the neutron drip lines for $Z$ values up to 120. Analyze then the fluorine isotopes and find, where available, the corresponding experimental data, and compare the liquid drop model prediction with experiment. Comment your results. A program example in C++ and the input data file bedata.dat can be found at the github repository for the course.
The program for finding the eigenvalues of the harmonic oscillator is in the github folder https://github.com/NuclearStructure/PHY981/tree/master/doc/pub/spdata/programs.
You can use this program to solve the exercises below, or write your own using your preferred programming language, be it Python, Fortran, C++ or another language. Here we will mainly provide Fortran, Python and C++ examples.
a) Compute the eigenvalues of the five lowest states with a given orbital momentum and oscillator frequency $\omega$. Study these results as functions of the maximum value of $r$ and the number of integration points $n$, starting with $r_{\mathrm{max}}=10$. Compare the computed eigenvalues with the exact values and comment your results.
b) Plot thereafter the eigenfunctions as functions of $r$ for the lowest-lying state with a given orbital momentum $l$.
c) Replace thereafter the harmonic oscillator potential with a Woods-Saxon potential using the parameters discussed above. Compute the lowest five eigenvalues and plot the eigenfunction of the lowest-lying state. How does this compare with the harmonic oscillator? Comment your results and possible implications for nuclear physics studies.
Consider the Slater determinant
$$\Phi^{AS}(x_{1}x_{2}\dots x_{N};\alpha_{1}\alpha_{2}\dots\alpha_{N}) = \frac{1}{\sqrt{N!}}\sum_{p}(-1)^{p}\hat{P}\prod_{i=1}^{N}\psi_{\alpha_{i}}(x_{i}),$$
where $\hat{P}$ is an operator which permutes the coordinates of two particles, the sum runs over all possible permutations $p$ and $(-1)^p$ is the sign of the permutation. We have assumed here that the number of particles is the same as the number of available single-particle states, represented by the greek letters $\alpha_{1}\alpha_{2}\dots\alpha_{N}$.
a) Write out $\Phi^{AS}$ for $N=3$.
b) Show that
c) Define a general onebody operator $\hat{F} = \sum_{i}^N\hat{f}(x_{i})$ and a general twobody operator $\hat{G}=\sum_{i>j}^N\hat{g}(x_{i},x_{j})$ with $g$ being invariant under the interchange of the coordinates of particles $i$ and $j$. Calculate the matrix elements for a two-particle Slater determinant
and
Explain the short-hand notation for the Slater determinant. Which properties do you expect these operators to have in addition to a possible permutation symmetry?
We will now consider a simple three-level problem, depicted in the figure below. This is our first and very simple model of a possible many-nucleon (or just fermion) problem and the shell-model. The single-particle states are labelled by the quantum number $p$ and can accommodate up to two single particles, viz., every single-particle state is doubly degenerate (you could think of this as one state having spin up and the other spin down). We let the spacing between the doubly degenerate single-particle states be constant, with value $d$. The first state has energy $d$. There are only three available single-particle states, $p=1$, $p=2$ and $p=3$, as illustrated in the figure.
a) How many two-particle Slater determinants can we construct in this space?
We limit ourselves to a system with only the two lowest single-particle orbits and two particles, $p=1$ and $p=2$. We assume that we can write the Hamiltonian as
and that the onebody part of the Hamiltonian with single-particle operator $\hat{h}_0$ has the property
where we have added a spin quantum number $\sigma$. We assume also that the only two-particle states that can exist are those where two particles are in the same state $p$, as shown by the two possibilities to the left in the figure. The two-particle matrix elements of $\hat{H}_I$ all have a constant value, $-g$.
b) Show then that the Hamiltonian matrix can be written as
c) Find the eigenvalues and eigenvectors. What is the mixing of the state with two particles in $p=2$ into the wave function with two particles in $p=1$? Discuss your results in terms of a linear combination of Slater determinants.
d)
Add the possibility that the two particles can be in the state with $p=3$ as well and find the Hamiltonian matrix, the eigenvalues and the eigenvectors. We still insist that we only have two-particle states composed of two particles being in the same level $p$. You can diagonalize numerically your $3\times 3$ matrix.
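A minimal numerical sketch for part d) is shown here, using the conventions above: a pair in level $p$ has unperturbed energy $2pd$ and all two-particle matrix elements of the interaction equal $-g$, including the diagonal ones.

# Diagonalization of the 3x3 pairing Hamiltonian of part d).
import numpy as np

d, g = 1.0, 0.5   # example values; vary these to study the interplay between d and g
H = np.array([[2*d - g,      -g,      -g],
              [     -g, 4*d - g,      -g],
              [     -g,      -g, 6*d - g]])
EigValues, EigVectors = np.linalg.eigh(H)
print(EigValues)
print(EigVectors[:, 0])   # admixtures of the three pair configurations in the ground state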
This simple model kills several birds with one stone. It demonstrates how we can build linear combinations
of Slater determinants and interpret these as different admixtures to a given state. It represents also the way we are going to interpret these contributions. The two-particle states above $p=1$ will be interpreted as
excitations from the ground state configuration, $p=1$ here. The reliability of this ansatz for the ground state,
with two particles in $p=1$,
depends on the strength of the interaction $g$ and the single-particle spacing $d$.
Finally, this model is a simple schematic ansatz for studies of pairing correlations and thereby superfluidity/superconductivity
in fermionic systems.
Schematic plot of the possible single-particle levels with double degeneracy. The filled circles indicate occupied particle states. The spacing between each level $p$ is constant in this picture. We show some possible two-particle states.
Hartree-Fock (HF) theory is an algorithm for finding an approximate expression for the ground state of a given Hamiltonian. The basic ingredients are
with the Hartree-Fock Hamiltonian defined as
The term $\hat{u}^{\mathrm{HF}}$ is a single-particle potential to be determined by the HF algorithm.
that is to find a local minimum with a Slater determinant $\Phi_0$ being the ansatz for the ground state.
We will show that the Hartree-Fock Hamiltonian $\hat{h}^{\mathrm{HF}}$ equals our definition of the operator $\hat{f}$ discussed in connection with the new definition of the normal-ordered Hamiltonian (see later lectures), that is we have, for a specific matrix element
meaning that
The so-called Hartree-Fock potential $\hat{u}^{\mathrm{HF}}$ brings an explicit medium dependence due to the summation over all single-particle states below the Fermi level $F$. It also brings in an explicit dependence on the two-body interaction (in nuclear physics we can also have complicated three- or higher-body forces). The two-body interaction, with its contribution from the other bystanding fermions, creates an effective mean field in which a given fermion moves, in addition to the external potential $\hat{u}_{\mathrm{ext}}$ which confines the motion of the fermion. For systems like nuclei, there is no external confining potential. Nuclei are examples of self-bound systems, where the binding arises from the intrinsic nature of the strong force. For nuclear systems there is thus no external one-body potential in the Hartree-Fock Hamiltonian.
The calculus of variations involves problems where the quantity to be minimized or maximized is an integral.
In the general case we have an integral of the type
where $E$ is the quantity which is sought minimized or maximized. The problem is that although $f$ is a function of the variables $\Phi$, $\partial \Phi/\partial x$ and $x$, the exact dependence of $\Phi$ on $x$ is not known. This means again that even though the integral has fixed limits $a$ and $b$, the path of integration is not known. In our case the unknown quantities are the single-particle wave functions and we wish to choose an integration path which makes the functional $E[\Phi]$ stationary. This means that we want to find minima, or maxima or saddle points. In physics we search normally for minima. Our task is therefore to find the minimum of $E[\Phi]$ so that its variation $\delta E$ is zero subject to specific constraints. In our case the constraints appear as the integral which expresses the orthogonality of the single-particle wave functions. The constraints can be treated via the technique of Lagrangian multipliers
Let us specialize to the expectation value of the energy for one particle in three-dimensions. This expectation value reads
with the constraint
and a Hamiltonian
We will, for the sake of notational convenience, skip the variables $x,y,z$ below, and write for example $V(x,y,z)=V$.
The integral involving the kinetic energy can be written as, with the function $\psi$ vanishing strongly for large values of $x,y,z$ (given here by the limits $a$ and $b$),
We will drop the limits $a$ and $b$ in the remaining discussion. Inserting this expression into the expectation value for the energy and taking the variational minimum we obtain
The constraint appears in integral form as
and multiplying with a Lagrangian multiplier $\lambda$ and taking the variational minimum we obtain the final variational equation
We introduce the function $f$
where we have skipped the dependence on $x,y,z$ and introduced the shorthand $\psi_x$, $\psi_y$ and $\psi_z$ for the various derivatives.
For $\psi^*$ the Euler-Lagrange equations yield
which results in
We can then identify the Lagrangian multiplier as the energy of the system. The last equation is nothing but the standard Schroedinger equation and the variational approach discussed here provides a powerful method for obtaining approximate solutions of the wave function.
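In compact form, and assuming the standard kinetic-energy term $-\hbar^2\nabla^2/2m$ in the Hamiltonian, the steps above can be summarized as

$$f = \frac{\hbar^2}{2m}\left(\psi_x^*\psi_x + \psi_y^*\psi_y + \psi_z^*\psi_z\right) + V\psi^*\psi - \lambda\psi^*\psi,$$

$$\frac{\partial f}{\partial \psi^*} - \frac{d}{dx}\frac{\partial f}{\partial \psi_x^*} - \frac{d}{dy}\frac{\partial f}{\partial \psi_y^*} - \frac{d}{dz}\frac{\partial f}{\partial \psi_z^*} = 0,$$

$$-\frac{\hbar^2}{2m}\left(\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} + \frac{\partial^2\psi}{\partial z^2}\right) + V\psi = \lambda\psi,$$

with the last line being the Schroedinger equation referred to above.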
Before we proceed we need some definitions. We will assume that the interacting part of the Hamiltonian can be approximated by a two-body interaction. This means that our Hamiltonian is written as the sum of some onebody part and a twobody part
with
The onebody part $u_{\mathrm{ext}}(x_i)$ is normally approximated by a harmonic oscillator potential or the Coulomb interaction an electron feels from the nucleus. However, other potentials are fully possible, such as one derived from the self-consistent solution of the Hartree-Fock equations to be discussed here.
Our Hamiltonian is invariant under the permutation (interchange) of two particles. Since we deal with fermions however, the total wave function is antisymmetric. Let $\hat{P}$ be an operator which interchanges two particles. Due to the symmetries we have ascribed to our Hamiltonian, this operator commutes with the total Hamiltonian,
meaning that $\Psi_{\lambda}(x_1, x_2, \dots , x_A)$ is an eigenfunction of $\hat{P}$ as well, that is
where $\beta$ is the eigenvalue of $\hat{P}$. We have introduced the suffix $ij$ in order to indicate that we permute particles $i$ and $j$. The Pauli principle tells us that the total wave function for a system of fermions has to be antisymmetric, resulting in the eigenvalue $\beta = -1$.
In our case we assume that we can approximate the exact eigenfunction with a Slater determinant
where $x_i$ stand for the coordinates and spin values of particle $i$ and $\alpha,\beta,\dots, \gamma$ are the quantum numbers needed to specify the single-particle states.
The single-particle functions $\psi_{\alpha}(x_i)$ are eigenfunctions of the onebody Hamiltonian $h_i$, that is
with eigenvalues
The energies $\varepsilon_{\alpha}$ are the so-called non-interacting single-particle energies, or unperturbed energies. The total energy is in this case the sum over all single-particle energies, if no two-body or more complicated many-body interactions are present.
Let us denote the ground state energy by $E_0$. According to the variational principle we have
where $\Phi$ is a trial function which we assume to be normalized
where we have used the shorthand $d\mathbf{\tau}=dx_1dx_2\dots dx_A$.
In the Hartree-Fock method the trial function is the Slater determinant of Eq. (11) which can be rewritten as
where we have introduced the antisymmetrization operator $\hat{A}$ defined by the summation over all possible permutations of two particles.
It is defined as
with $p$ standing for the number of permutations. We have introduced for later use the so-called Hartree-function, defined by the simple product of all possible single-particle functions
Both $\hat{H}_0$ and $\hat{H}_I$ are invariant under all possible permutations of any two particles and hence commute with $\hat{A}$
Furthermore, $\hat{A}$ satisfies
since every permutation of the Slater determinant reproduces it.
The expectation value of $\hat{H}_0$
is readily reduced to
The integral vanishes if two or more particles are permuted in only one of the Hartree-functions $\Phi_H$ because the individual single-particle wave functions are orthogonal. We obtain then
Orthogonality of the single-particle functions allows us to further simplify the integral, and we arrive at the following expression for the expectation values of the sum of one-body Hamiltonians
We introduce the following shorthand for the above integral
and rewrite Eq. (15) as
The expectation value of the two-body part of the Hamiltonian is obtained in a similar manner. We have
which reduces to
by following the same arguments as for the one-body Hamiltonian.
Because of the dependence on the inter-particle distance $r_{ij}$, the contributions from permutations of any two particles no longer vanish, and we get
where $P_{ij}$ is the permutation operator that interchanges particle $i$ and particle $j$. Again we use the assumption that the single-particle wave functions are orthogonal.
We obtain
The first term is the so-called direct term. It is frequently also called the Hartree term, while the second is due to the Pauli principle and is called the exchange term or just the Fock term. The factor $1/2$ is introduced because we now run over all pairs twice.
The last equation allows us to introduce some further definitions.
The single-particle wave functions $\psi_{\mu}(x)$, labelled by the quantum numbers $\mu$ and the coordinate $x$, are given by the overlap
We introduce the following shorthands for the above two integrals
and
where $\Phi$ is a trial function which we assume to be normalized
where we have used the shorthand $d\mathbf{\tau}=dx_1dx_2\dots dx_A$.
In the Hartree-Fock method the trial function is a Slater determinant which can be rewritten as
where we have introduced the anti-symmetrization operator $\hat{A}$ defined by the summation over all possible permutations p of two fermions. It is defined as
with the Hartree-function given by the simple product of all single-particle functions
Our functional is written as
The more compact version reads
Since the interaction is invariant under the interchange of two particles it means for example that we have
or in the more general case
The direct and exchange matrix elements can be brought together if we define the antisymmetrized matrix element
or for a general matrix element
It has the symmetry property
The antisymmetric matrix element is also hermitian, implying
With these notations we rewrite the Hartree-Fock functional as
Adding the contribution from the one-body operator $\hat{H}_0$ to (18) we obtain the energy functional
In our coordinate space derivations below we will spell out the Hartree-Fock equations in terms of their integrals.
If we generalize the Euler-Lagrange equations to more variables and introduce $N^2$ Lagrange multipliers which we denote by $\epsilon_{\mu\nu}$, we can write the variational equation for the functional of $E$
For the orthogonal wave functions $\psi_{i}$ this reduces to
Variation with respect to the single-particle wave functions $\psi_{\mu}$ yields then
Although the variations $\delta\psi$ and $\delta\psi^*$ are not independent, they may in fact be treated as such, so that the terms dependent on either $\delta\psi$ and $\delta\psi^*$ individually may be set equal to zero. To see this, simply replace the arbitrary variation $\delta\psi$ by $i\delta\psi$, so that $\delta\psi^*$ is replaced by $-i\delta\psi^*$, and combine the two equations. We thus arrive at the Hartree-Fock equations
Notice that the integration $\int dx_j$ implies an integration over the spatial coordinates $\mathbf{r_j}$ and a summation over the spin-coordinate of fermion $j$. We note that the factor of $1/2$ in front of the sum involving the two-body interaction, has been removed. This is due to the fact that we need to vary both $\delta\psi_{\mu}^*$ and $\delta\psi_{\nu}^*$. Using the symmetry properties of the two-body interaction and interchanging $\mu$ and $\nu$ as summation indices, we obtain two identical terms.
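Written out with the symbols defined above (a reconstruction of the standard coordinate-space form of Eq. (20)), the Hartree-Fock equations read

$$\left[-\frac{\hbar^2}{2m}\nabla_i^2 + \hat{u}_{\mathrm{ext}}(x_i)\right]\psi_{\mu}(x_i) +\sum_{\nu=1}^{N}\int dx_j\,\psi_{\nu}^{*}(x_j)\hat{v}(r_{ij})\psi_{\nu}(x_j)\,\psi_{\mu}(x_i) -\sum_{\nu=1}^{N}\int dx_j\,\psi_{\nu}^{*}(x_j)\hat{v}(r_{ij})\psi_{\mu}(x_j)\,\psi_{\nu}(x_i) =\epsilon_{\mu}\psi_{\mu}(x_i).$$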
The two first terms in the last equation are the one-body kinetic energy and the electron-nucleus potential. The third or direct term is the averaged electronic repulsion of the other electrons. As written, the term includes the self-interaction of electrons when $\mu=\nu$. The self-interaction is cancelled in the fourth term, or the exchange term. The exchange term results from our inclusion of the Pauli principle and the assumed determinantal form of the wave function. Equation (20), in addition to the kinetic energy and the attraction from the atomic nucleus that confines the motion of a single electron, now represents the motion of a single particle modified by the two-body interaction. The additional contribution to the Schroedinger equation due to the two-body interaction represents a mean field set up by all the other bystanding electrons, the latter given by the sum over all single-particle states occupied by $N$ electrons.
The Hartree-Fock equation is an example of an integro-differential equation. These equations involve repeated calculations of integrals, in addition to the solution of a set of coupled differential equations. The Hartree-Fock equations can also be rewritten in terms of an eigenvalue problem. The solution of an eigenvalue problem often represents a more practical algorithm than the solution of coupled integro-differential equations. This alternative derivation of the Hartree-Fock equations is given below.
A theoretically convenient form of the Hartree-Fock equations is obtained by regarding the direct and exchange operators, defined through
and
respectively.
The function $g(x_i)$ is an arbitrary function, and by the substitution $g(x_i) = \psi_{\nu}(x_i)$ we get
We may then rewrite the Hartree-Fock equations as
with
and where $\hat{h}_0(i)$ is the one-body part. The latter is normally chosen as a part which yields solutions in closed form. The harmonic oscillator is a classic example thereof. We normally rewrite the last equation as
Another possibility is to expand the single-particle functions in a known basis and vary the coefficients, that is, the new single-particle wave function is written as a linear expansion in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions etc). We define our new Hartree-Fock single-particle basis by performing a unitary transformation on our previous basis (labelled with greek indices) as
In this case we vary the coefficients $C_{p\lambda}$. If the basis has infinitely many solutions, we need to truncate the above sum. We assume that the basis $\phi_{\lambda}$ is orthogonal. A unitary transformation keeps the orthogonality, as discussed in exercise 1 below.
It is normal to choose a single-particle basis defined as the eigenfunctions of parts of the full Hamiltonian. The typical situation consists of the solutions of the one-body part of the Hamiltonian, that is we have
The single-particle wave functions $\phi_{\lambda}({\bf r})$, defined by the quantum numbers $\lambda$ and ${\bf r}$ are defined as the overlap
In our discussions hereafter we will use our definitions of single-particle states above and below the Fermi ($F$) level given by the labels $ijkl\dots \le F$ for so-called single-hole states and $abcd\dots > F$ for so-called particle states. For general single-particle states we employ the labels $pqrs\dots$.
In Eq. (19), restated here
we found the expression for the energy functional in terms of the basis function $\phi_{\lambda}({\bf r})$. We then varied the above energy functional with respect to the basis functions $|\mu \rangle$. Now we are interested in defining a new basis defined in terms of a chosen basis as defined in Eq. (21). We can then rewrite the energy functional as
We wish now to minimize the above functional. We introduce again a set of Lagrange multipliers, noting that since $\langle i | j \rangle = \delta_{i,j}$ and $\langle \alpha | \beta \rangle = \delta_{\alpha,\beta}$, the coefficients $C_{i\gamma}$ obey the relation
which allows us to define a functional to be minimized that reads
Minimizing with respect to $C^*_{i\alpha}$, remembering that the equations for $C^*_{i\alpha}$ and $C_{i\alpha}$ can be written as two independent equations, we obtain
which yields for every single-particle state $i$ and index $\alpha$ (recalling that the coefficients $C_{i\alpha}$ are matrix elements of a unitary (or orthogonal for a real symmetric matrix) matrix) the following Hartree-Fock equations
We can rewrite this equation as (changing dummy variables)
Note that the sums over greek indices run over the number of basis set functions (in principle an infinite number).
Defining
we can rewrite the new equations as
The latter is nothing but a standard eigenvalue problem. Compared with Eq. (20), we see that we do not need to compute any integrals in an iterative procedure for solving the equations. It suffices to tabulate the matrix elements $\langle \alpha | h | \beta \rangle$ and $\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}$ once and for all. Successive iterations require thus only a look-up in tables over one-body and two-body matrix elements. These details will be discussed below when we solve the Hartree-Fock equations numerically.
Our Hartree-Fock matrix is thus
The Hartree-Fock equations are solved in an iterative way, starting with a guess for the coefficients $C_{j\gamma}=\delta_{j,\gamma}$ and solving the equations by diagonalization until the new single-particle energies $\epsilon_i^{\mathrm{HF}}$ no longer change by more than a prefixed quantity.
Normally we assume that the single-particle basis $|\beta\rangle$ forms an eigenbasis for the operator $\hat{h}_0$, meaning that the Hartree-Fock matrix becomes
The Hartree-Fock eigenvalue problem
can be written out in a more compact form as
The Hartree-Fock equations are, in their simplest form, solved in an iterative way, starting with a guess for the coefficients $C_{i\alpha}$. We label the coefficients as $C_{i\alpha}^{(n)}$, where the subscript $n$ stands for iteration $n$. To set up the algorithm we can proceed as follows:
We start with a guess $C_{i\alpha}^{(0)}=\delta_{i,\alpha}$. Alternatively, we could have used random starting values as long as the vectors are normalized. Another possibility is to give states below the Fermi level a larger weight.
The Hartree-Fock matrix simplifies then to (assuming that the coefficients $C_{i\alpha} $ are real)
Solving the Hartree-Fock eigenvalue problem yields then new eigenvectors $C_{i\alpha}^{(1)}$ and eigenvalues $\epsilon_i^{HF(1)}$.
The diagonalization with the new Hartree-Fock potential yields new eigenvectors and eigenvalues. This process is continued until, for example,
where $\lambda$ is a user prefixed quantity ($\lambda \sim 10^{-8}$ or smaller) and $p$ runs over all calculated single-particle energies and $m$ is the number of single-particle states.
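A minimal Python sketch of this self-consistency loop is given below. The names h0 (one-body matrix elements $\langle\alpha|\hat{h}_0|\beta\rangle$), v_AS (antisymmetrized two-body matrix elements $\langle\alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}$) and num_particles are illustrative assumptions; the structure follows the algorithm just described.

# Sketch of the Hartree-Fock self-consistency loop for real matrix elements.
# Assumed inputs: h0[alpha, beta], v_AS[alpha, gamma, beta, delta] and the number of
# occupied states num_particles, all in a fixed single-particle basis of size n_basis.
import numpy as np

def hartree_fock(h0, v_AS, num_particles, max_iter=100, tol=1e-8):
    n_basis = h0.shape[0]
    C = np.eye(n_basis)                  # initial guess C^(0)_{i alpha} = delta_{i alpha}
    eps_old = np.zeros(n_basis)
    for iteration in range(max_iter):
        # density matrix built from the occupied (hole) states
        rho = C[:, :num_particles] @ C[:, :num_particles].T
        # HF matrix: h0 + sum_{gamma delta} rho_{gamma delta} <alpha gamma|v|beta delta>_AS
        h_HF = h0 + np.einsum('gd,agbd->ab', rho, v_AS)
        eps, C = np.linalg.eigh(h_HF)    # new single-particle energies and coefficients
        if np.sum(np.abs(eps - eps_old)) / n_basis < tol:
            break
        eps_old = eps
    return eps, C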
We can rewrite the ground state energy by adding and subtracting $\hat{u}^{HF}(x_i)$
which results in
Our single-particle states $ijk\dots$ are now single-particle states obtained from the solution of the Hartree-Fock equations.
Using our definition of the Hartree-Fock single-particle energies we obtain then the following expression for the total ground-state energy
This form will be used in our discussion of Koopmans' theorem.
We have
where $\Phi^{\mathrm{HF}}(N)$ is the new Slater determinant defined by the new basis of Eq. (21) for $N$ electrons (same $Z$). If we assume that the single-particle wave functions in the new basis do not change when we remove one electron or add one electron, we can then define the corresponding energy for the $N-1$ systems as
where we have removed a single-particle state $k\le F$, that is a state below the Fermi level.
Calculating the difference
we obtain
which is just our definition of the Hartree-Fock single-particle energy
Similarly, we can now compute the difference (we label the single-particle states above the Fermi level as $abcd > F$)
These two equations can thus be used to compute the electron affinity or ionization energies, respectively. Koopmans' theorem states that for example the ionization energy of a closed-shell system is given by the energy of the highest occupied single-particle state. If we assume that changing the number of electrons from $N$ to $N+1$ does not change the Hartree-Fock single-particle energies and eigenfunctions, then Koopmans' theorem simply states that the ionization energy of an atom is given by the single-particle energy of the last bound state. In a similar way, we can also define the electron affinities.
As an example, consider a simple model for atomic sodium, Na. Neutral sodium has eleven electrons, with the most weakly bound one occupying the $3s$ single-particle state. The energy needed to remove an electron from neutral sodium is rather small, 5.1391 eV, a feature which pertains to all alkali metals. Having performed a Hartree-Fock calculation for neutral sodium would then allow us to compute the ionization energy by using the single-particle energy for the $3s$ state, namely $\epsilon_{3s}^{\mathrm{HF}}$.
From these considerations, we see that Hartree-Fock theory allows us to make a connection between experimental
observables (here ionization and affinity energies) and the underlying interactions between particles.
In this sense, we are now linking the dynamics and structure of a many-body system with the laws of motion which govern the system. Our approach is a reductionistic one, meaning that we want to understand the laws of motion
in terms of the particles or degrees of freedom which we believe are the fundamental ones. Our Slater determinant, being constructed as the product of various single-particle functions, follows this philosophy.
With similar arguments as in atomic physics, we can now use Hartree-Fock theory to make a link between nuclear forces and separation energies. Changing to the nuclear case, we define
where $\Phi^{\mathrm{HF}}(A)$ is the new Slater determinant defined by the new basis of Eq. (21) for $A$ nucleons, where $A=N+Z$, with $N$ now being the number of neutrons and $Z$ the number of protons. If we assume again that the single-particle wave functions in the new basis do not change from a nucleus with $A$ nucleons to a nucleus with $A-1$ nucleons, we can then define the corresponding energy for the $A-1$ systems as
where we have removed a single-particle state $k\le F$, that is a state below the Fermi level.
Calculating the difference
which becomes
which is just our definition of the Hartree-Fock single-particle energy
Similarly, we can now compute the difference (recall that the single-particle states $abcd > F$)
If we then recall that the binding energy differences
define the separation energies, we see that the Hartree-Fock single-particle energies can be used to define separation energies. We have thus our first link between nuclear forces (included in the potential energy term) and an observable quantity defined by differences in binding energies.
We have thus the following interpretations (if the single-particle fields do not change)
and
If we use ${}^{16}\mbox{O}$ as our closed-shell nucleus, we could then interpret the separation energy
and
Similarly, we could interpret
and
We can continue like this for all $A\pm 1$ nuclei where $A$ is a good closed-shell (or subshell closure) nucleus. Examples are ${}^{22}\mbox{O}$, ${}^{24}\mbox{O}$, ${}^{40}\mbox{Ca}$, ${}^{48}\mbox{Ca}$, ${}^{52}\mbox{Ca}$, ${}^{54}\mbox{Ca}$, ${}^{56}\mbox{Ni}$, ${}^{68}\mbox{Ni}$, ${}^{78}\mbox{Ni}$, ${}^{90}\mbox{Zr}$, ${}^{88}\mbox{Sr}$, ${}^{100}\mbox{Sn}$, ${}^{132}\mbox{Sn}$ and ${}^{208}\mbox{Pb}$, to mention some possible cases.
We can thus make our first interpretation of the separation energies in terms of the simplest possible many-body theory. If we also recall that the so-called energy gap for neutrons (or protons) is defined as
for neutrons and the corresponding gap for protons
we can define the neutron and proton energy gaps for ${}^{16}\mbox{O}$ as
and
brings us into the new basis.
The new basis has quantum numbers $a=1,2,\dots,N$.
a) Show that the new basis is orthonormal.
b) Show that the new Slater determinant constructed from the new single-particle wave functions can be written as the determinant based on the previous basis and the determinant of the matrix $C$.
c) Show that the old and the new Slater determinants are equal up to a complex constant with absolute value unity.
Hint: $C$ is a unitary matrix.
Consider the Slater determinant
A small variation in this function is given by
a) Show that
Second quantization and operators, two-body operator with examples
Wick's theorem (missing in html and ipython notebook but in pdf files)
Thouless' theorem and analysis of the Hartree-Fock equations using second quantization
Examples on how to use bit representations for Slater determinants
TO DO: add material about Wick's theorem and diagrammatic representation
TO DO: Link determinantal analysis with Thouless' theorem
We introduce the time-independent operators $a_\alpha^{\dagger}$ and $a_\alpha$ which create and annihilate, respectively, a particle in the single-particle state $\varphi_\alpha$. We define the fermion creation operator $a_\alpha^{\dagger}$
and
In Eq. (25) the operator $a_\alpha^{\dagger}$ acts on the vacuum state $|0\rangle$, which does not contain any particles. Alternatively, we could define a closed-shell nucleus or atom as our new vacuum, but then we need to introduce the particle-hole formalism, see the discussion to come.
In Eq. (26) $a_\alpha^{\dagger}$ acts on an antisymmetric $n$-particle state and creates an antisymmetric $(n+1)$-particle state, where the one-body state $\varphi_\alpha$ is occupied, under the condition that $\alpha \ne \alpha_1, \alpha_2, \dots, \alpha_n$. It follows that we can express an antisymmetric state as the product of the creation operators acting on the vacuum state.
It is easy to derive the commutation and anticommutation rules for the fermionic creation operators $a_\alpha^{\dagger}$. Using the antisymmetry of the states (27)
we obtain
Using the Pauli principle
it follows that
The hermitian conjugate of $a_\alpha^{\dagger}$ is
If we take the hermitian conjugate of Eq. (32), we arrive at
What is the physical interpretation of the operator $a_\alpha$ and what is the effect of $a_\alpha$ on a given state $|\alpha_1\alpha_2\dots\alpha_n\rangle_{\mathrm{AS}}$? Consider the following matrix element
where both sides are antisymmetric. We distinguish between two cases. The first (1) is when $\alpha \in \{\alpha_i\}$. Using the Pauli principle of Eq. (30) it follows
The second (2) case is when $\alpha \notin \{\alpha_i\}$. By hermitian conjugation it follows that
Here we must have $m = n+1$ if Eq. (38) is to be different from zero.
For the last case, the minus and plus signs apply when the sequence $\alpha ,\alpha_1, \alpha_2, \dots, \alpha_n$ and $\alpha_1', \alpha_2', \dots, \alpha_{n+1}'$ are related to each other via even and odd permutations. If we assume that $\alpha \notin \{\alpha_i\}$ we obtain
when $\alpha \in \{\alpha_i'\}$. If $\alpha \notin \{\alpha_i'\}$, we obtain
and in particular
If $\{\alpha\alpha_i\} = \{\alpha_i'\}$, performing the right permutations, the sequence $\alpha ,\alpha_1, \alpha_2, \dots, \alpha_n$ is identical with the sequence $\alpha_1', \alpha_2', \dots, \alpha_{n+1}'$. This results in
and thus
The action of the operator $a_\alpha$ from the left on a state vector is to remove one particle in the state $\alpha$. If the state vector does not contain the single-particle state $\alpha$, the outcome of the operation is zero. The operator $a_\alpha$ is normally referred to as a destruction or annihilation operator.
The next step is to establish the commutator algebra of $a_\alpha^{\dagger}$ and $a_\beta$.
The action of the anti-commutator $\left\{a_\alpha^{\dagger},a_\alpha\right\}$ on a given $n$-particle state is
if the single-particle state $\alpha$ is not contained in the state.
If it is present we arrive at
The action of $\left\{a_\alpha^{\dagger}, a_\beta\right\}$, with $\alpha \ne \beta$ on a given state yields three possibilities. The first case is a state vector which contains both $\alpha$ and $\beta$, then either $\alpha$ or $\beta$ and finally none of them.
The first case results in
while the second case gives
Finally if the state vector does not contain $\alpha$ and $\beta$
For all three cases we have
where $\delta_{\alpha\beta}$ is the Kronecker $\delta$-symbol.
The properties of the creation and annihilation operators can be summarized as (for fermions)
and
from which follows
The hermitian conjugate has the following properties
Finally we found
and
and the corresponding commutator algebra
A very useful operator is the so-called number operator. Most physics cases we will study in this text conserve the total number of particles. The number operator is therefore a useful quantity which allows us to test that our many-body formalism conserves the number of particles. In for example $(d,p)$ or $(p,d)$ reactions it is important to be able to describe quantum mechanical states where particles get added or removed. A creation operator $a_\alpha^{\dagger}$ adds one particle to the single-particle state $\alpha$ of a given many-body state vector, while an annihilation operator $a_\alpha$ removes a particle from a single-particle state $\alpha$.
Let us consider an operator proportional to $a_\alpha^{\dagger} a_\beta$ with $\alpha=\beta$. It acts on an $n$-particle state resulting in
Summing over all possible one-particle states we arrive at
The operator
is called the number operator since it counts the number of particles in a given state vector when it acts on the different single-particle states. It acts on one single-particle state at a time and is therefore a one-body operator. Next we look at another important one-body operator, namely $\hat{H}_0$, and study its operator form in the occupation number representation.
We want to obtain an expression for a one-body operator which conserves the number of particles. Here we study the one-body operator for the kinetic energy plus a possible external one-body potential. The action of this operator on a particular $n$-body state with its pertinent expectation value has already been studied in coordinate space. In coordinate space the operator reads
and the anti-symmetric $n$-particle Slater determinant is defined as
Defining
we can easily evaluate the action of $\hat{H}_0$ on each product of one-particle functions in the Slater determinant. From Eq. (55) we obtain the following result without permuting any particle pair
If we interchange particles $1$ and $2$ we obtain
We can continue by computing all possible permutations. We rewrite also our Slater determinant in its second quantized form and skip the dependence on the quantum numbers $x_i.$ Summing up all contributions and taking care of all phases $(-1)^p$ we arrive at
Inserting this in the right-hand side of Eq. (58) results in
In the number occupation representation or second quantization we get the following expression for a one-body operator which conserves the number of particles
Obviously, $\hat{H}_0$ can be replaced by any other one-body operator which preserves the number of particles. The structure of the operator is therefore not limited to, say, the kinetic or single-particle energy only.
The operator $\hat{H}_0$ takes a particle from the single-particle state $\beta$ to the single-particle state $\alpha$ with a probability amplitude for the transition given by the matrix element $\langle \alpha|\hat{h}_0|\beta\rangle$.
It is instructive to verify Eq. (61) by computing the expectation value of $\hat{H}_0$ between two single-particle states
Using the commutation relations for the creation and annihilation operators we have
which results in
and
Let us now derive the expression for our two-body interaction part, which also conserves the number of particles. We can proceed in exactly the same way as for the one-body operator. In the coordinate representation our two-body interaction part takes the following expression
where the summation runs over distinct pairs. The term $V$ can be an interaction model for the nucleon-nucleon interaction or the interaction between two electrons. It can also include additional two-body interaction terms.
The action of this operator on a product of two single-particle functions is defined as
We can now let $\hat{H}_I$ act on all terms in the linear combination for $|\alpha_1\alpha_2\dots\alpha_n\rangle$. Without any permutations we have
where on the rhs we have a term for each distinct pair.
For the other terms on the rhs we obtain similar expressions and summing over all terms we obtain
We introduce second quantization via the relation
Inserting this in (69) gives
Here we let $\sum'$ indicate that the sums running over $\alpha$ and $\beta$ run over all single-particle states, while the summations $\gamma$ and $\delta$ run over all pairs of single-particle states. We wish to remove this restriction and since
we get
where we have used the anti-commutation rules.
Changing the summation indices $\alpha$ and $\beta$ in (74) we obtain
From this it follows that the restriction on the summation over $\gamma$ and $\delta$ can be removed if we multiply with a factor $\frac{1}{2}$, resulting in
where we sum freely over all single-particle states $\alpha$, $\beta$, $\gamma$ and $\delta$.
With this expression we can now verify that the second quantization form of $\hat{H}_I$ in Eq. (76) results in the same matrix between two anti-symmetrized two-particle states as its corresponding coordinate space representation. We have
Using the commutation relations we get
The vacuum expectation value of this product of operators becomes
The two-body operator can also be expressed in terms of the anti-symmetrized matrix elements we discussed previously as
The factor in front of the operator, either $\frac{1}{4}$ or $\frac{1}{2}$, tells us whether we use antisymmetrized matrix elements or not.
We can now express the Hamiltonian operator for a many-fermion system in the occupation basis representation as
This is the form we will use in the rest of these lectures, assuming that we work with anti-symmetrized two-body matrix elements.
Second quantization is a useful and elegant formalism for constructing many-body states and
quantum mechanical operators. One can express and translate many physical processes
into simple pictures such as Feynman diagrams. Expectation values of many-body states are also easily calculated.
However, although the equations are seemingly easy to set up, from a practical point of view, that is
the solution of Schroedinger's equation, there is no particular gain.
The many-body equation is equally hard to solve, irrespective of representation.
The cliche that
there is no free lunch brings us down to earth again.
Note however that a transformation to a particular
basis, for cases where the interaction obeys specific symmetries, can ease the solution of Schroedinger's equation.
But there is at least one important case where second quantization comes to our rescue. Namely, it is easy to introduce a reference state other than the pure vacuum $|0\rangle$, where all single-particle states are active. With many particles present it is often useful to introduce another reference state than the vacuum state $|0\rangle$. We will label this state $|c\rangle$ ($c$ for core) and as we will see it can reduce considerably the complexity and thereby the dimensionality of the many-body problem. It allows us to sum up to infinite order specific many-body correlations. The particle-hole representation is one of these handy representations.
In the original particle representation these states are products of the creation operators $a_{\alpha_i}^\dagger$ acting on the true vacuum $|0\rangle $. Following Eq. (27) we have
If we use Eq. (83) as our new reference state, we can simplify considerably the representation of this state
The new reference states for the $n+1$ and $n-1$ states can then be written as
The first state has one additional particle with respect to the new vacuum state $|c\rangle $ and is normally referred to as a one-particle state or one particle added to the many-body reference state. The second state has one particle less than the reference vacuum state $|c\rangle $ and is referred to as a one-hole state. When dealing with a new reference state it is often convenient to introduce new creation and annihilation operators since we have from Eq. (88)
since $\alpha$ is contained in $|c\rangle $, while for the true vacuum we have $a_\alpha |0\rangle = 0$ for all $\alpha$.
The new reference state leads to the definition of new creation and annihilation operators which satisfy the following relations
We assume also that the new reference state is properly normalized
The physical interpretation of these new operators is that of so-called quasiparticle states. This means that a state defined by the addition of one extra particle to a reference state $|c\rangle $ may not necessarily be interpreted as one particle coupled to a core. We define now new creation operators that act on a state $\alpha$ creating a new quasiparticle state
where $F$ is the Fermi level representing the last occupied single-particle orbit of the new reference state $|c\rangle $.
The annihilation is the hermitian conjugate of the creation operator
resulting in
With the new creation and annihilation operator we can now construct many-body quasiparticle states, with one-particle-one-hole states, two-particle-two-hole states etc in the same fashion as we previously constructed many-particle states. We can write a general particle-hole state as
We can now rewrite our one-body and two-body operators in terms of the new creation and annihilation operators. The number operator becomes
where $n_c$ is the number of particles in the new vacuum state $|c\rangle $.
The action of $\hat{N}$ on a many-body state results in
Here $n=n_p +n_c - n_h$ is the total number of particles in the quasi-particle state of Eq. (95). Note that $\hat{N}$ counts the total number of particles present
gives us the number of quasi-particles as can be seen by computing
where $n_{qp} = n_p + n_h$ is the total number of quasi-particles.
We express the one-body operator $\hat{H}_0$ in terms of the quasi-particle creation and annihilation operators, resulting in
The first term contributes only for particle states, while the last one contributes only for hole states. The second term can create or destroy a set of quasi-particles and the third term is the contribution from the vacuum state $|c\rangle$.
Before we continue with the expressions for the two-body operator, we introduce a nomenclature we will use for the rest of this text. It is inspired by the notation used in quantum chemistry. We reserve the labels $i,j,k,\dots$ for hole states and $a,b,c,\dots$ for states above $F$, viz. particle states. This means also that we will skip the constraint $\leq F$ or $> F$ in the summation symbols. Our operator $\hat{H}_0$ reads now
The two-particle operator in the particle-hole formalism is more complicated since we have to translate four indices $\alpha\beta\gamma\delta$ to the possible combinations of particle and hole states. When performing the commutator algebra we can regroup the operator in five different terms
Using anti-symmetrized matrix elements, the term $\hat{H}_I^{(a)}$ is
The next term $\hat{H}_I^{(b)}$ reads
This term conserves the number of quasiparticles but creates or removes a three-particle-one-hole state. For $\hat{H}_I^{(c)}$ we have
The first line stands for the creation of a two-particle-two-hole state, the second line represents the creation of two one-particle-one-hole pairs, while the last term represents a contribution to the particle single-particle energy from the hole states, that is, an interaction between the particle states and the hole states within the new vacuum state. The fourth term reads
The terms in the first line stand for the creation of a particle-hole state interacting with hole states; we will label this as a two-hole-one-particle contribution. The remaining terms are a particle-hole state interacting with the holes in the vacuum state. Finally we have
The first term represents the interaction between two holes while the second stands for the interaction between a hole and the remaining holes in the vacuum state. It represents a contribution to the single-hole energy to first order. The last term collects all contributions to the energy of the ground state of a closed-shell system arising from hole-hole correlations.
which is equivalent with $|\alpha_1 \dots \alpha_A\rangle= a_{\alpha_1}^{\dagger} \dots a_{\alpha_A}^{\dagger} |0\rangle$. We have also
and
with $i,j,\ldots \leq \alpha_F, \quad a,b,\ldots > \alpha_F, \quad p,q, \ldots - \textrm{any}$
and
The one-body operator is defined as
while the two-body operator is defined as
where we have defined the antisymmetric matrix elements
We can also define a three-body operator
with the antisymmetrized matrix element
We wish now to derive the Hartree-Fock equations using our second-quantized formalism and study the stability of the equations. Our ansatz for the ground state of the system is approximated as (this is our representation of a Slater determinant in second quantization)
We wish to determine $\hat{u}^{HF}$ so that $E_0^{HF}= \langle c|\hat{H}| c\rangle$ becomes a local minimum.
In our analysis here we will need Thouless' theorem, which states that an arbitrary Slater determinant $|c'\rangle$ which is not orthogonal to a determinant $| c\rangle ={\displaystyle\prod_{i=1}^{n}} a_{\alpha_{i}}^{\dagger}|0\rangle$, can be written as
Let us give a simple proof of Thouless' theorem. The theorem states that we can make a linear combination of particle-hole excitations with respect to a given reference state $\vert c\rangle$. With this linear combination, we can make a new Slater determinant $\vert c'\rangle $ which is not orthogonal to $\vert c\rangle$, that is
To show this we need some intermediate steps. The exponential product of two operators $\exp{\hat{A}}\times\exp{\hat{B}}$ is equal to $\exp{(\hat{A}+\hat{B})}$ only if the two operators commute, that is
If the operators do not commute, we need to resort to the Baker-Campbell-Hausdorff relation. It states that
with
From these relations, we note that in our expression for $|c'\rangle$ we have commutators of the type
and it is easy to convince oneself that these commutators, or higher powers thereof, are all zero. This means that we can write out our new representation of a Slater determinant as
We note that
and all higher-order powers of these combinations of creation and annihilation operators disappear due to the fact that $(a_i)^n| c\rangle =0$ when $n > 1$. This allows us to rewrite the expression for $|c'\rangle $ as
which we can rewrite as
The last equation can be written as
we have
meaning that the new representation of the Slater determinant in second quantization, $|c'\rangle$, looks like our previous ones. However, this representation is not general enough since we have a restriction on the sum over single-particle states in Eq. (108). The single-particle states have all to be above the Fermi level. The question then is whether we can construct a general representation of a Slater determinant with a creation operator
where $f_{ip}$ is a matrix element of a unitary matrix which transforms our creation and annihilation operators $a^{\dagger}$ and $a$ to $\tilde{b}^{\dagger}$ and $\tilde{b}$. These new operators define a new representation of a Slater determinant as
which is nothing but the determinant $det(f_{ip})$ which we can, using the intermediate normalization condition, normalize to one, that is
meaning that $f$ has an inverse defined as (since we are dealing with orthogonal, and in our case unitary as well, transformations)
and
Using these relations we can then define the linear combination of creation (and annihilation as well) operators as
Defining
we can redefine
our starting point. We have shown that our general representation of a Slater determinant
with
This means that we can actually write an ansatz for the ground state of the system as a linear combination of terms which contain the ansatz itself $|c\rangle$ with an admixture from an infinity of one-particle-one-hole states. The latter has important consequences when we wish to interpret the Hartree-Fock equations and their stability. We can rewrite the new representation as
where $|\delta c\rangle$ can now be interpreted as a small variation. If we approximate this term with contributions from one-particle-one-hole (1p-1h) states only, we arrive at
In our derivation of the Hartree-Fock equations we have shown that
which means that we have to satisfy
With this as a background, we are now ready to study the stability of the Hartree-Fock equations.
The variational condition for deriving the Hartree-Fock equations guarantees only that the expectation value $\langle c | \hat{H} | c \rangle$ has an extreme value, not necessarily a minimum. To figure out whether the extreme value we have found is a minimum, we can use second quantization to analyze our results and find a criterion for the above expectation value to be a local minimum. We will use Thouless' theorem and show that
with
Using Thouless' theorem we can write out ${|c'\rangle}$ as
where the amplitudes $\delta C$ are small.
The norm of $|c'\rangle$ is given by (using the intermediate normalization condition $\langle c' |c\rangle=1$)
The expectation value for the energy is now given by (using the Hartree-Fock condition)
We have already calculated the second term on the right-hand side of the previous equation
resulting in
which is nothing but
or
where we have used the relation
due to the hermiticity of $\hat{H}$ and $\hat{V}$.
We define two matrix elements
and
both being anti-symmetrized.
With these definitions we write out the energy as
which can be rewritten as
and skipping higher-order terms we arrive at
We have defined
with the vectors
and the matrix
with $\Delta_{ai,bj} = (\varepsilon_a-\varepsilon_i)\delta_{ab}\delta_{ij}$.
The condition
for an arbitrary vector
means that all eigenvalues of the matrix have to be larger than or equal to zero. A necessary (but not sufficient) condition is that the matrix elements (for all $ai$)
This equation can be used as a first test of the stability of the Hartree-Fock equation.
A central element in the build-up of a shell-model or FCI code that is meant to tackle large dimensionalities is the action of the Hamiltonian $\hat{H}$ on a Slater determinant represented in second quantization as
The time-consuming part stems from the action of the Hamiltonian on the above determinant,
A practically useful way to implement this action is to encode a Slater determinant as a bit pattern.
Assume that we have at our disposal $n$ different single-particle orbits $\alpha_0,\alpha_1,\dots,\alpha_{n-1}$ and that we can distribute among these orbits $N\le n$ particles.
A Slater determinant can then be coded as an integer of $n$ bits. As an example, if we have $n=16$ single-particle states $\alpha_0,\alpha_1,\dots,\alpha_{15}$ and $N=4$ fermions occupying the states $\alpha_3$, $\alpha_6$, $\alpha_{10}$ and $\alpha_{13}$ we could write this Slater determinant as
The unoccupied single-particle states have bit value $0$ while the occupied ones are represented by bit state $1$. In the binary notation we would write this 16-bit integer as
which translates into the decimal number
We can thus encode a Slater determinant as a bit pattern.
With $N$ particles that can be distributed over $n$ single-particle states, the total number of Slater determinants (which defines the dimensionality of the system) is
The total number of bit patterns is $2^n$.
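As a quick check of these numbers for the example above, with $n=16$ and $N=4$:

# Dimensionality for n=16 single-particle states and N=4 fermions.
from math import comb   # Python 3.8+; older versions can use scipy.special.comb
n, N = 16, 4
print(comb(n, N))   # number of Slater determinants: 1820
print(2**n)         # total number of 16-bit patterns: 65536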
We assume again that we have at our disposal $n$ different single-particle orbits $\alpha_0,\alpha_1,\dots,\alpha_{n-1}$ and that we can distribute among these orbits $N\le n$ particles. The ordering among these states is important as it defines the order of the creation operators. We will write the determinant
in a more compact way as
The action of a creation operator is thus
which becomes
Similarly
which becomes
This gives a simple recipe:
If one of the bits $b_j$ is $1$ and we act with a creation operator on this bit, we return a null vector
If $b_j=0$, we set it to $1$ and return a sign factor $(-1)^l$, where $l$ is the number of bits set before bit $j$.
Consider the action of $a^{\dagger}_{\alpha_2}$ on various Slater determinants:
What is the simplest way to obtain the phase when we act with one annihilation(creation) operator on the given Slater determinant representation?
We have an SD representation
in a more compact way as
The action of
which becomes
The action
can be obtained by subtracting the bitwise AND of $\Phi_{0,3,6,10,13}$ and a word which represents only $\alpha_0$, that is
from $\Phi_{0,3,6,10,13}= |1001001000100100\rangle$.
This operation gives $|0001001000100100\rangle$.
Similarly, we can form $a^{\dagger}_{\alpha_4}a_{\alpha_0}\Phi_{0,3,6,10,13}$, say, by adding $|0000100000000000\rangle$ to $a_{\alpha_0}\Phi_{0,3,6,10,13}$, first checking that their bitwise AND is zero in order to make sure that orbital $\alpha_4$ is not already occupied.
It is trickier however to get the phase $(-1)^l$. One possibility is as follows
In the previous example $S_1=|1000000000000000\rangle$
$S_2=|0000100000000000\rangle$.
which results in $|0001000000000000\rangle$. Counting the number of $1-$bits gives the phase. Here you need, however, an algorithm for bit counting. Several efficient ones are available.
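A minimal Python sketch of these bit manipulations is shown below. The convention that bit $j$ of the integer, counted from the least significant bit, represents the single-particle state $\alpha_j$ is an implementation choice, and the helper names are illustrative only.

# Creation and annihilation operators on a Slater determinant stored as a bit pattern.
# Convention (an implementation choice): bit j of the integer represents alpha_j.
def count_bits_below(state, j):
    """Number of occupied states alpha_0,...,alpha_{j-1}, which determines the phase (-1)^l."""
    return bin(state & ((1 << j) - 1)).count('1')

def create(state, j):
    """Apply a^dagger_{alpha_j}: return (phase, new_state), or (0, 0) if alpha_j is occupied."""
    if state & (1 << j):
        return 0, 0
    return (-1) ** count_bits_below(state, j), state | (1 << j)

def annihilate(state, j):
    """Apply a_{alpha_j}: return (phase, new_state), or (0, 0) if alpha_j is empty."""
    if not state & (1 << j):
        return 0, 0
    return (-1) ** count_bits_below(state, j), state & ~(1 << j)

# Example: a^dagger_{alpha_4} a_{alpha_0} acting on Phi_{0,3,6,10,13}
phi = (1 << 0) | (1 << 3) | (1 << 6) | (1 << 10) | (1 << 13)
phase1, phi1 = annihilate(phi, 0)   # phase +1, no occupied states below alpha_0
phase2, phi2 = create(phi1, 4)      # phase -1, since alpha_3 is occupied below alpha_4
print(phase1 * phase2, format(phi2, '016b'))   # note: prints the highest bit (alpha_15) first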
This exercise serves to convince you about the relation between two different single-particle bases, where one could be our new Hartree-Fock basis and the other a harmonic oscillator basis.
Consider a Slater determinant built up of single-particle orbitals $\psi_{\lambda}$, with $\lambda = 1,2,\dots,A$. The unitary transformation
brings us into the new basis.
The new basis has quantum numbers $a=1,2,\dots,A$.
Show that the new basis is orthonormal.
Show that the new Slater determinant constructed from the new single-particle wave functions can be
written as the determinant based on the previous basis and the determinant of the matrix $C$.
Show that the old and the new Slater determinants are equal up to a complex constant with absolute value unity.
(Hint, $C$ is a unitary matrix).
Starting with the second quantization representation of the Slater determinant
and
with
and
can be written, using standard annihilation and creation operators, in normal-ordered form as
can be written, using standard annihilation and creation operators, in normal-ordered form as
Explain again the meaning of the various symbols.
This exercise is optional: Derive the normal-ordered form of the threebody part of the Hamiltonian.
and specify the contributions to the twobody, onebody and the scalar part.
The aim of this exercise is to set up specific matrix elements that will turn useful when we start our discussions of the nuclear shell model. In particular you will notice, depending on the character of the operator, that many matrix elements will actually be zero.
Consider three $N$-particle Slater determinants $|\Phi_0\rangle$, $|\Phi_i^a\rangle$ and $|\Phi_{ij}^{ab}\rangle$, where the notation means that Slater determinant $|\Phi_i^a\rangle$ differs from $|\Phi_0\rangle$ by one single-particle state, that is a single-particle state $\psi_i$ is replaced by a single-particle state $\psi_a$. It is often interpreted as a so-called one-particle-one-hole excitation. Similarly, the Slater determinant $|\Phi_{ij}^{ab}\rangle$ differs by two single-particle states from $|\Phi_0\rangle$ and is normally thought of as a two-particle-two-hole excitation. We assume also that $|\Phi_0\rangle$ represents our new vacuum reference state and the labels $ijk\dots$ represent single-particle states below the Fermi level and $abc\dots$ represent states above the Fermi level, so-called particle states. We define thereafter a general onebody normal-ordered (with respect to the new vacuum state) operator as
with
and a general normal-ordered two-body operator
with for example the direct matrix element given as
with $g$ being invariant under the interchange of the coordinates of two particles. The single-particle states $\psi_i$ are not necessarily eigenstates of $\hat{f}$. The curly brackets mean that the operators are normal-ordered with respect to the new vacuum reference state.
How would you write the above Slater determinants in a second quantization formalism, utilizing the fact that we have defined a new reference state?
Use thereafter Wick's theorem to find the expectation values of
and
Find thereafter
and
Finally, find
and
What happens with the two-body operator if we have a transition probability of the type
where the Slater determinant to the right of the operator differs by more than two single-particle states?
Write a program which sets up all possible Slater determinants given $N=4$ electrons which can occupy the atomic single-particle states defined by the $1s$, $2s2p$ and $3s3p3d$ shells. How many single-particle states $n$ are there in total? Include the spin degrees of freedom as well.
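A minimal sketch for the first part of this exercise is shown below. Counting spin, the $1s$, $2s$, $2p$, $3s$, $3p$ and $3d$ orbitals give $n = 2+2+6+2+6+10 = 28$ single-particle states, and the Slater determinants can be enumerated as combinations of occupied states.

# Enumerate all Slater determinants for N=4 electrons in n=28 single-particle states
# (the 1s, 2s2p and 3s3p3d shells with spin included).
from itertools import combinations
from math import comb

n_states, n_electrons = 28, 4
determinants = list(combinations(range(n_states), n_electrons))
print(len(determinants), comb(n_states, n_electrons))   # both give 20475
print(determinants[:3])   # the first few determinants as tuples of occupied states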
Compute the matrix element
using Wick's theorem and express the two-body operator $G$ in the occupation number (second quantization) representation.
The last exercise can be solved using the symbolic Python package called SymPy. SymPy is a Python package for general purpose symbolic algebra. There is a physics module with several interesting submodules. Among these, the submodule called secondquant contains several functionalities that allow us to test our algebraic manipulations using Wick's theorem and operators for second quantization.
from sympy import *
from sympy.physics.secondquant import *
i, j = symbols('i,j', below_fermi=True)
a, b = symbols('a,b', above_fermi=True)
p, q = symbols('p,q')
print(simplify(wicks(Fd(i)*F(a)*Fd(p)*F(q)*Fd(b)*F(j), keep_only_fully_contracted=True)))
The code defines single-particle states above and below the Fermi level, in addition to the general symbols $pq$ which can refer to any type of state below or above the Fermi level. Wick's theorem is implemented between the creation and annihilation operators Fd and F, respectively. Using the simplify option, one can lump together several Kronecker-$\delta$ functions.
We can expand the above Python code by defining one-body and two-body operators using the following SymPy code
In [1]:
# This code sets up a two-body Hamiltonian for fermions
from sympy import symbols, latex, WildFunction, collect, Rational
from sympy.physics.secondquant import F, Fd, wicks, AntiSymmetricTensor, substitute_dummies, NO
# setup hamiltonian
p,q,r,s = symbols('p q r s',dummy=True)
f = AntiSymmetricTensor('f',(p,),(q,))
pr = NO((Fd(p)*F(q)))
v = AntiSymmetricTensor('v',(p,q),(r,s))
pqsr = NO(Fd(p)*Fd(q)*F(s)*F(r))
Hamiltonian=f*pr + Rational(1)/Rational(4)*v*pqsr
print "Hamiltonian defined as:", latex(Hamiltonian)
Here we have used the AntiSymmetricTensor functionality, together with normal-ordering defined by the NO function. Using the latex option, this program produces the following output
In [1]:
from sympy import symbols, latex, WildFunction, collect, Rational, simplify
from sympy.physics.secondquant import F, Fd, wicks, AntiSymmetricTensor, substitute_dummies, NO, evaluate_deltas
# setup hamiltonian
p,q,r,s = symbols('p q r s',dummy=True)
f = AntiSymmetricTensor('f',(p,),(q,))
pr = NO((Fd(p)*F(q)))
v = AntiSymmetricTensor('v',(p,q),(r,s))
pqsr = NO(Fd(p)*Fd(q)*F(s)*F(r))
Hamiltonian=f*pr + Rational(1)/Rational(4)*v*pqsr
c,d = symbols('c, d',above_fermi=True)
a,b = symbols('a, b',above_fermi=True)
expression = wicks(F(b)*F(a)*Hamiltonian*Fd(c)*Fd(d),keep_only_fully_contracted=True, simplify_kronecker_deltas=True)
expression = evaluate_deltas(expression)
expression = simplify(expression)
print "Hamiltonian defined as:", latex(expression)
The result is as expected,
In [1]:
from sympy import symbols, latex, WildFunction, collect, Rational, simplify
from sympy.physics.secondquant import F, Fd, wicks, AntiSymmetricTensor, substitute_dummies, NO, evaluate_deltas
# setup hamiltonian
p,q,r,s = symbols('p q r s',dummy=True)
f = AntiSymmetricTensor('f',(p,),(q,))
pr = Fd(p)*F(q)
v = AntiSymmetricTensor('v',(p,q),(r,s))
pqsr = Fd(p)*Fd(q)*F(s)*F(r)
#define the Hamiltonian
Hamiltonian = f*pr + Rational(1)/Rational(4)*v*pqsr
#define indices for states above and below the Fermi level
index_rule = {
'below': 'kl',
'above': 'cd',
'general': 'pqrs'
}
Hnormal = substitute_dummies(Hamiltonian,new_indices=True, pretty_indices=index_rule)
print "Hamiltonian defined as:", latex(Hnormal)
which results in
In [1]:
from sympy import symbols, latex, WildFunction, collect, Rational, simplify
from sympy.physics.secondquant import F, Fd, wicks, AntiSymmetricTensor, substitute_dummies, NO, evaluate_deltas
# setup hamiltonian
p,q,r,s = symbols('p q r s',dummy=True)
f = AntiSymmetricTensor('f',(p,),(q,))
pr = Fd(p)*F(q)
v = AntiSymmetricTensor('v',(p,q),(r,s))
pqsr = Fd(p)*Fd(q)*F(s)*F(r)
#define the Hamiltonian
Hamiltonian=f*pr + Rational(1)/Rational(4)*v*pqsr
#define indices for states above and below the Fermi level
index_rule = {
'below': 'kl',
'above': 'cd',
'general': 'pqrs'
}
Hnormal = substitute_dummies(Hamiltonian,new_indices=True, pretty_indices=index_rule)
E0 = wicks(Hnormal,keep_only_fully_contracted=True)
Hnormal = Hnormal-E0
w = WildFunction('w')
Hnormal = collect(Hnormal, NO(w))
Hnormal = evaluate_deltas(Hnormal)
print(latex(Hnormal))
which gives us
In [1]:
from sympy import symbols, latex, WildFunction, collect, Rational, simplify
from sympy.physics.secondquant import F, Fd, wicks, AntiSymmetricTensor, substitute_dummies, NO, evaluate_deltas
# setup hamiltonian
p,q,r,s = symbols('p q r s',dummy=True)
v = AntiSymmetricTensor('v',(p,q),(r,s))
pqsr = NO(Fd(p)*Fd(q)*F(s)*F(r))
Hamiltonian=Rational(1)/Rational(4)*v*pqsr
a,b,c,d,e,f = symbols('a,b, c, d, e, f',above_fermi=True)
expression = wicks(F(c)*F(b)*F(a)*Hamiltonian*Fd(d)*Fd(e)*Fd(f),keep_only_fully_contracted=True, simplify_kronecker_deltas=True)
expression = evaluate_deltas(expression)
expression = simplify(expression)
print(latex(expression))
resulting in nine terms (as expected),
(Represent $(-\hat{u}^{\mathrm{HF}})$ by the symbol $---\times$.)
Consider the ground state $|\Phi\rangle$ of a bound many-particle system of fermions. Assume that we remove one particle from the single-particle state $\lambda$ and that our system ends in a new state $|\Phi_{n}\rangle$. Define the energy needed to remove this particle as
where $E_{0}$ and $E_{n}$ are the ground state energies of the states $|\Phi\rangle$ and $|\Phi_{n}\rangle$, respectively.
where $H$ is the Hamiltonian of this system.
Derive the relation between $E_{\lambda}$ and the single-particle energy $\varepsilon_{\lambda}$ for states $\lambda \leq F$ and $\lambda >F$, with
and
We have assumed an antisymmetrized matrix element here. Discuss the result.
The Hamiltonian operator is defined as
Discussion of single-particle and two-particle quantum numbers, uncoupled and coupled schemes
Discussion of nuclear forces and models thereof (this material will not be covered in depth this spring)
In order to understand the basics of the nucleon-nucleon interaction, we need to define the relevant quantum numbers and how we build up a single-particle state and a two-body state.
For the single-particle states, due to the fact that we have the spin-orbit force, the quantum numbers for the projection of orbital momentum $l$, that is $m_l$, and for spin $s$, that is $m_s$, are no longer so-called good quantum numbers. The total angular momentum $j$ and its projection $m_j$ are then so-called good quantum numbers.
This means that the operator $\hat{J}^2$ does not commute with $\hat{L}_z$ or $\hat{S}_z$.
We also start normally with single-particle state functions defined using say the harmonic oscillator. For these functions, we have no explicit dependence on $j$. How can we introduce single-particle wave functions which have $j$ and its projection $m_j$ as quantum numbers?
We have that the operators for the orbital momentum are given by
$$
\hat{L}_x = yp_z-zp_y, \qquad \hat{L}_y = zp_x-xp_z, \qquad \hat{L}_z = xp_y-yp_x.
$$
Since we have a strong spin-orbit force, it is easy to show that the total angular momentum operator
does not commute with $\hat{L}_z$ and $\hat{S}_z$. To see this, we calculate for example
since we have that $[\hat{L}_z,\hat{L}_x]=i\hbar\hat{L}_y$ and $[\hat{L}_z,\hat{L}_y]=-i\hbar\hat{L}_x$.
We have also
with the following degeneracy
With a given value of $L$ and $S$ we can then determine the possible values of $J$ by studying the $z$ component of $\hat{J}$. It is given by
The operators $\hat{L}_z$ and $\hat{S}_z$ have the quantum numbers $L_z=M_L\hbar$ and $S_z=M_S\hbar$, respectively, meaning that
or
Since the max value of $M_L$ is $L$ and for $M_S$ is $S$ we obtain
For nucleons we have that the maximum value of $M_S=m_s=1/2$, yielding
Using this and the fact that the maximum value of $M_J=m_j$ is $j$ we have
To decide where this series terminates, we use the vector inequality
Using $\hat{J}=\hat{L}+\hat{S}$ we get
or
If we limit ourselves to nucleons only with $s=1/2$ we find that
It is then easy to show that for nucleons there are only two possible values of $j$ which satisfy the inequality, namely $j=l+1/2$ or $j=l-1/2$,
and with $l=0$ we get $j=1/2$.
Let us study some selected examples. We need also to keep in mind that parity is conserved. The strong and electromagnetic Hamiltonians conserve parity. Thus the eigenstates can be broken down into two classes of states labeled by their parity $\pi= +1$ or $\pi=-1$. The nuclear interactions do not mix states with different parity.
For nuclear structure the total parity originates from the intrinsic parity of the nucleon which is $\pi_{\mathrm{intrinsic}}=+1$ and the parities associated with the orbital angular momenta $\pi_l=(-1)^l$ . The total parity is the product over all nucleons $\pi = \prod_i \pi_{\mathrm{intrinsic}}(i)\pi_l(i) = \prod_i (-1)^{l_i}$
The basis states we deal with are constructed so that they conserve parity and have thus a definite parity.
Note that we do have parity-violating processes; more on this later, although our focus will mainly be on non-parity-violating processes.
Consider now the single-particle orbits of the $1s0d$ shell. For a $0d$ state we have the quantum numbers $l=2$, $m_l=-2,-1,0,1,2$, $s=1/2$, $m_s=\pm 1/2$, $n=0$ (the number of nodes of the wave function). This means that we have positive parity and
and
Our single-particle wave functions, if we use the harmonic oscillator, do however not contain the quantum numbers $j$ and $m_j$. Normally what we have is an eigenfunction for the one-body problem defined as
where we have used spherical coordinates (with a spherically symmetric potential) and the spherical harmonics
with $P_l^{m_l}$ being the so-called associated Legendre polynomials.
Examples are
for $l=m_l=0$,
for $l=1$ and $m_l=0$,
for $l=1$ and $m_l=\pm 1$,
for $l=2$ and $m_l=0$ etc.
How can we get a function in terms of $j$ and $m_j$? Define now
and
as the state with quantum numbers $jm_j$. Operating with
on the latter state we will obtain admixtures from possible $\phi_{nlm_lsm_s}(r,\theta,\phi)$ states.
To see this, we consider the following example and fix
and
It means we can have, with $l=2$ and $s=1/2$ being fixed, in order to have $m_j=3/2$ either $m_l=1$ and $m_s=1/2$ or $m_l=2$ and $m_s=-1/2$. The two states
and
will have admixtures from $\phi_{n=0l=2m_l=2s=1/2m_s=-1/2}$ and $\phi_{n=0l=2m_l=1s=1/2m_s=1/2}$.
How do we find these admixtures? Note that we don't specify the values of $m_l$ and $m_s$
in the functions $\psi$ since
$\hat{j}^2$ does not commute with $\hat{L}_z$ and $\hat{S}_z$.
We operate with
on the two $jm_j$ states, that is
This means that the states $\phi_{n=0l=2m_l=2s=1/2m_s=-1/2}$ etc. are not eigenstates of $\hat{j}^2$. The above problem gives a $2\times2$ matrix that mixes the vectors $\psi_{n=0j=5/2m_j=3/2;l=2m_ls=1/2m_s}$ and $\psi_{n=0j=3/2m_j=3/2;l=2m_ls=1/2m_s}$ with the states $\phi_{n=0l=2m_l=2s=1/2m_s=-1/2}$ and $\phi_{n=0l=2m_l=1s=1/2m_s=1/2}$. The unknown coefficients $\alpha$ and $\beta$ result from the eigenvectors of this matrix. That is, inserting all values $m_l,l,m_s,s$ we obtain the matrix
whose eigenvectors are the columns of
These numbers define the so-called Clebsch-Gordan coupling coefficients (the overlaps between the two basis sets). We can thus write
$$
\psi_{njm_j;ls}(r,\theta,\phi) = \sum_{m_lm_s}\langle lm_lsm_s|jm_j\rangle\phi_{nlm_lsm_s}(r,\theta,\phi),
$$
where the coefficients $\langle lm_lsm_s|jm_j\rangle$ are the so-called Clebsch-Gordan coefficients. The relevant quantum numbers are $n$ (related to the principal quantum number and the number of nodes of the wave function) and
$$
\hat{j}^2\psi_{njm_j;ls} = \hbar^2 j(j+1)\psi_{njm_j;ls},
\qquad
\hat{j}_z\psi_{njm_j;ls} = \hbar m_j\psi_{njm_j;ls},
$$

$$
\hat{l}^2\psi_{njm_j;ls} = \hbar^2 l(l+1)\psi_{njm_j;ls},
\qquad
\hat{s}^2\psi_{njm_j;ls} = \hbar^2 s(s+1)\psi_{njm_j;ls},
$$
but $s_z$ and $l_z$ do not result in good quantum numbers in a basis where we use the angular momentum $j$.
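The coefficients $\alpha$ and $\beta$ above are nothing but Clebsch-Gordan coefficients, and they can be evaluated symbolically with SymPy as a cross-check of the $2\times 2$ diagonalization. The sketch below assumes the example discussed above, that is $l=2$, $s=1/2$ and $m_j=3/2$.

In [1]:

from sympy import Rational
from sympy.physics.quantum.cg import CG

# Clebsch-Gordan coefficients <l m_l s m_s | j m_j> for l=2, s=1/2, m_j=3/2
l, s, mj = 2, Rational(1, 2), Rational(3, 2)
for j in (Rational(5, 2), Rational(3, 2)):
    for ml, ms in ((1, Rational(1, 2)), (2, Rational(-1, 2))):
        coeff = CG(l, ml, s, ms, j, mj).doit()
        print("j =", j, " m_l =", ml, " m_s =", ms, " <l m_l s m_s|j m_j> =", coeff)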
For a two-body state where we couple two angular momenta $j_1$ and $j_2$ to a final angular momentum $J$ with projection $M_J$, we can define a similar transformation in terms of the Clebsch-Gordan coefficients
We will write these functions in a more compact form hereafter, namely,
and
where we have skipped the explicit reference to $l$, $s$ and $n$. The spin of a nucleon is always $1/2$ while the value of $l$ can be deduced from the parity of the state. It is thus normal to label a state with a given total angular momentum as $j^{\pi}$, where $\pi=\pm 1$.
Our two-body state can thus be written as
Due to the coupling order of the Clebsch-Gordan coefficient it reads as $j_1$ coupled to $j_2$ to yield a final angular momentum $J$. If we invert the order of coupling we would have
and due to the symmetry properties of the Clebsch-Gordan coefficient we have
We call the basis $|(j_1j_2)JM_J\rangle$ for the coupled basis, or just $j$-coupled basis/scheme. The basis formed by the simple product of single-particle eigenstates $|j_1m_{j_1}\rangle|j_2m_{j_2}\rangle$ is called the uncoupled-basis, or just the $m$-scheme basis.
We have thus the coupled basis
and the uncoupled basis
The latter can easily be generalized to many single-particle states whereas the first needs specific coupling coefficients and definitions of coupling orders. The $m$-scheme basis is easy to implement numerically and is used in most standard shell-model codes. Our coupled basis obeys also the following relations
$$
\hat{J}^2|(j_1j_2)JM_J\rangle = \hbar^2 J(J+1)|(j_1j_2)JM_J\rangle,
\qquad
\hat{J}_z|(j_1j_2)JM_J\rangle = \hbar M_J|(j_1j_2)JM_J\rangle.
$$
The nuclear forces are almost charge independent. If we assume they are, we can introduce a new quantum number which is conserved. For nucleons only, that is a proton and neutron, we can limit ourselves to two possible values which allow us to distinguish between the two particles. If we assign an isospin value of $\tau=1/2$ for protons and neutrons (they belong to an isospin doublet, in the same way as we discussed the spin $1/2$ multiplet), we can define the neutron to have isospin projection $\tau_z=+1/2$ and a proton to have $\tau_z=-1/2$. These assignments are the standard choices in low-energy nuclear physics.
This leads to the introduction of an additional quantum number called isospin. We can define a single-nucleon state function in terms of the quantum numbers $n$, $j$, $m_j$, $l$, $s$, $\tau$ and $\tau_z$. Using our definitions in terms of an uncoupled basis, we had
which we can now extend to
with the isospin spinors defined as
and
We can then define the proton state function as
and similarly for neutrons as
We can in turn define the isospin Pauli matrices (in the same way as we define the spin matrices) as
$$
\hat{\tau}_x = \begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix},
\qquad
\hat{\tau}_y = \begin{bmatrix} 0 & -\imath \\ \imath & 0\end{bmatrix},
\qquad
\hat{\tau}_z = \begin{bmatrix} 1 & 0 \\ 0 & -1\end{bmatrix},
$$
and operating with $\hat{\tau}_z$ on the proton state function we have
and for neutrons we have
We can now define the so-called charge operator as
which results in
and
as it should be.
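A small numerical check of the isospin algebra is sketched below. The explicit spinors and the form $\hat{Q}=\frac{1}{2}(1-\hat{\tau}_z)$ of the charge operator are assumptions chosen to be consistent with the convention above, where the neutron has $\hat{\tau}_z$ eigenvalue $+1$ and the proton $-1$.

In [1]:

import numpy as np

tau_z = np.array([[1, 0], [0, -1]])
neutron = np.array([1, 0])          # tau_z eigenvalue +1 in this convention
proton = np.array([0, 1])           # tau_z eigenvalue -1 in this convention
Q = 0.5 * (np.eye(2) - tau_z)       # assumed charge operator Q = (1 - tau_z)/2
print("Q|p> =", Q @ proton)         # returns the proton spinor, charge +1
print("Q|n> =", Q @ neutron)        # returns the zero vector, charge 0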
The total isospin is defined as
and its corresponding isospin projection as
with eigenvalues $T(T+1)$ for $\hat{T}^2$ and $1/2(N-Z)$ for $\hat{T}_z$, where $N$ is the number of neutrons and $Z$ the number of protons.
If charge is conserved, the Hamiltonian $\hat{H}$ commutes with $\hat{T}_z$. All members of a given isospin multiplet (that is, with the same value of $T$) then have the same energy; there is no $T_z$ dependence, and we say that $\hat{H}$ is a scalar in isospin space.
Till now we have not said anything about the explicit calculation of two-body matrix elements. It is time to amend this deficiency. We have till now seen the following definitions of two-body matrix elements. In $m$-scheme with quantum numbers $p=j_pm_p$ etc we have a two-body state defined as
where $|\Phi_0\rangle$ is a chosen reference state, say for example the Slater determinant which approximates ${}^{16}\mbox{O}$ with the $0s$ and the $0p$ shells being filled, and $M=m_p+m_q$. Recall that we label single-particle states above the Fermi level as $abcd\dots$ and states below the Fermi level for $ijkl\dots$.
In case of two-particles in the single-particle states $a$ and $b$ outside ${}^{16}\mbox{O}$ as a closed shell core, say ${}^{18}\mbox{O}$,
we would write the representation of the Slater determinant as
In case of two-particles removed from say ${}^{16}\mbox{O}$, for example two neutrons in the single-particle states $i$ and $j$, we would write this as
For a one-hole-one-particle state we have
and finally for a two-particle-two-hole state we have
Let us go back to the case of two-particles in the single-particle states $a$ and $b$ outside ${}^{16}\mbox{O}$ as a closed shell core, say ${}^{18}\mbox{O}$. The representation of the Slater determinant is
The anti-symmetrized matrix element is detailed as
and note that anti-symmetrization means
$$
\langle ab|\hat{v}|cd\rangle = -\langle ab|\hat{v}|dc\rangle = -\langle ba|\hat{v}|cd\rangle = \langle ba|\hat{v}|dc\rangle.
$$
This matrix element is the expectation value of
We have also defined matrix elements in the coupled basis, the so-called $J$-coupled scheme. In this case the two-body wave function for two neutrons outside ${}^{16}\mbox{O}$ is written as
with
We have now an explicit coupling order, where the angular momentum $j_a$ is coupled to the angular momentum $j_b$ to yield a final two-body angular momentum $J$. The normalization factor (to be derived below) is
The implementation of the Pauli principle looks different in the $J$-scheme compared with the $m$-scheme. In the latter, no two fermions or more can have the same set of quantum numbers. In the $J$-scheme, when we write a state with the shorthand
we do refer to the angular momenta only. This means that another way of writing the last state is
We will use this notation throughout when we refer to a two-body state in $J$-scheme. The Kronecker $\delta$ function in the normalization factor refers thus to the values of $j_a$ and $j_b$. If two identical particles are in a state with the same $j$-value, then only even values of the total angular momentum apply.
Note also that, using the anti-commuting properties of the creation operators, we obtain
Furthermore, using the property of the Clebsch-Gordan coefficient
which can be used to show that
is equal to
This relation is important since we will need it when using anti-symmetrized matrix elements in $J$-scheme.
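The statement that two identical particles in the same $j$ shell can only couple to even $J$ follows from this Clebsch-Gordan symmetry, and it can be checked numerically with SymPy. The sketch below antisymmetrizes the pair amplitude for $j=5/2$ and $M_J=0$ (an arbitrary example) and shows that it vanishes for odd $J$.

In [1]:

from sympy import Rational
from sympy.physics.quantum.cg import CG

# Antisymmetrized pair amplitude <j m j -m|J 0> - <j -m j m|J 0>, summed in quadrature
j = Rational(5, 2)
for J in range(int(2 * j) + 1):
    weight = 0
    for twom in range(1, int(2 * j) + 1, 2):      # m = 1/2, 3/2, 5/2
        m = Rational(twom, 2)
        amp = CG(j, m, j, -m, J, 0).doit() - CG(j, -m, j, m, J, 0).doit()
        weight += amp ** 2
    print("J =", J, " antisymmetric weight =", weight)   # zero for odd J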
The two-body matrix element is a scalar and since it obeys rotational symmetry, it is diagonal in $J$, meaning that the corresponding matrix element in $J$-scheme is
and note that of the four $m$-values in the above sum, only three are independent due to the constraint $m_a+m_b=M=m_c+m_d$. Since
the anti-symmetrized matrix elements need now to obey the following relations
where the last relation follows from the fact that $J$ is an integer and $2J$ is always an even number.
Using the orthogonality properties of the Clebsch-Gordan coefficients,
and
we can also express the two-body matrix element in $m$-scheme in terms of that in $J$-scheme, that is, if we multiply with
from left in
we obtain
Chadwick (1932) discovers the neutron and Heisenberg (1932) proposes the first Phenomenology (Isospin).
Yukawa (1935) and his Meson Hypothesis
Discovery of the pion in cosmic rays (1947) and in the Berkeley Cyclotron Lab (1948).
Nobel Prize awarded to Yukawa (1949). Rabi (1948) measures quadrupole moment of the deuteron.
Taketani, Nakamura, Sasaki (1951): 3 ranges. One-Pion-Exchange (OPE): o.k.
Multi-pion exchanges: Problems! Taketani, Machida, Onuma (1952);
Pion Theories Brueckner, Watson (1953).
Many pions = multi-pion resonances: $\sigma(600)$, $\rho(770)$, $\omega(782)$ etc. One-Boson-Exchange Model.
Refined Meson Theories
Sophisticated models for two-pion exchange:
* Paris Potential (Lacombe et al., Phys. Rev. C **21**, 861 (1980))
* Bonn potential (Machleidt et al., Phys. Rep. **149**, 1 (1987))
Quark cluster models. Begin of effective field theory studies.
1990's
1993-2001: High-precision NN potentials: Nijmegen I, II, '93, Reid93 (Stoks et al. 1994),
Argonne V18 (Wiringa et al, 1995), CD-Bonn (Machleidt et al. 1996 and 2001).
Advances in effective field theory: Weinberg (1990); Ordonez, Ray, van Kolck and many more.
3rd Millennium
Nucleon-nucleon interaction from Lattice QCD, final confirmation of meson hypothesis of Yukawa? See for example Ishii et al, PRL 2007
The aim is to give you an overview over central features of the nucleon-nucleon interaction and how it is constructed, with both technical and theoretical approaches.
The existence of the deuteron with $J^{\pi}=1^+$ indicates that the force between protons and neutrons is attractive at least for the $^3S_1$ partial wave. Interference between Coulomb and nuclear scattering for the proton-proton partial wave $^1S_0$ shows that the NN force is attractive at least for the $^1S_0$ partial wave.
It has a short range and strong intermediate attraction.
Spin dependent, scattering lengths for triplet and singlet states are different,
Spin-orbit force. Observation of large polarizations of scattered nucleons perpendicular to the plane of scattering.
Strongly repulsive core. The $s$-wave phase shift becomes negative at $\approx 250$ MeV implying that the singlet $S$ has a hard core with range $0.4-0.5$ fm.
Charge independence (almost). Two nucleons in a given two-body state always (almost) experience the same force. Modern interactions break charge and isospin symmetry slightly. That means that the proton-proton, neutron-neutron and proton-neutron parts of the interaction will be different for the same quantum numbers.
Non-central. There is a tensor force. First indications from the quadrupole moment of the deuteron pointing to an admixture in the ground state of both $l=2$ ($^3D_1$) and $l=0$ ($^3S_1$) orbital momenta.
Comparison of the binding energies of ${}^2\mbox{H}$ (deuteron), ${}^3\mbox{H}$ (triton), ${}^4\mbox{He}$ (alpha - particle) show that the nuclear force is of finite range ($1-2$ fm) and very strong within that range.
For nuclei with $A>4$, the energy saturates: Volume and binding energies of nuclei are proportional to the mass number $A$ (as we saw from exercise 1).
Nuclei are also bound. The average distance between nucleons in nuclei is about $2$ fm which must roughly correspond to the range of the attractive part.
After correcting for the electromagnetic interaction, the forces between nucleons (pp, nn, or np) in the same state are almost the same.
Almost the same: Charge-independence is slightly broken.
Equality between the pp and nn forces: Charge symmetry.
Equality between pp/nn force and np force: Charge independence.
Better notation: Isospin symmetry, invariance under rotations in isospin
Charge-symmetry breaking (CSB), after electromagnetic effects have been removed:
$a_{pp}= -17.3 \pm 0.4\,\mathrm{fm}$
$a_{nn}=-18.8 \pm 0.5\,\mathrm{fm}$. Note however the discrepancy from $nd$ breakup reactions resulting in $a_{nn}=-18.72 \pm 0.13 \pm 0.65\,\mathrm{fm}$ and $\pi^- + d \rightarrow \gamma + 2n$ reactions giving $a_{nn}=-18.93 \pm 0.27 \pm 0.3\,\mathrm{fm}$.
Charge-independence breaking (CIB)
Translation invariance
Galilean invariance
Rotation invariance in space
Space reflection invariance
Time reversal invariance
Invariance under the interchange of particle $1$ and $2$
Almost isospin symmetry
Here we display a typical way to parametrize (in a non-relativistic expression) the nuclear two-body force in terms of some operators: the central part, the spin-spin part, the spin-orbit part and the tensor force.
$$
V(r)=\left\{C_C + C_\sigma\, \boldsymbol{\sigma}_1\cdot\boldsymbol{\sigma}_2
+ C_T\left(1+\frac{3}{m_\alpha r}+\frac{3}{(m_\alpha r)^2}\right)S_{12}(\hat{r})
+ C_{SL}\left(\frac{1}{m_\alpha r}+\frac{1}{(m_\alpha r)^2}\right)\mathbf{L}\cdot\mathbf{S}\right\}
\frac{e^{-m_\alpha r}}{m_\alpha r}.
$$
How do we derive such terms? (Note: no isospin dependence and that the above is an approximation)
To derive the above famous form of the nuclear force using field theoretical concepts, we will need some elements from relativistic quantum mechanics. These derivations will be given below. The material here gives some background to this. I know that many of you have not taken a course in quantum field theory. I hope however that you can see the basic ideas leading to the famous non-relativistic expressions for the nuclear force.
Furthermore, when we analyze nuclear data, we will actually try to explain properties like spectra, single-particle energies etc in terms of the various terms of the nuclear force. Moreover, many of you will hear about these terms at various talks, workshops, seminars etc. Then, it is good to have an idea of what people actually mean!!
Baryons | Mass (MeV) | Mesons | Mass (MeV) |
---|---|---|---|
$p,n$ | 938.926 | $\pi$ | 138.03 |
$\Lambda$ | 1116.0 | $\eta$ | 548.8 |
$\Sigma$ | 1197.3 | $\sigma$ | $\approx 550.0$ |
$\Delta$ | 1232.0 | $\rho$ | 770 |
  |   | $\omega$ | 782.6 |
  |   | $\delta$ | 983.0 |
  |   | $K$ | 495.8 |
  |   | $K^{\star}$ | 895.0 |
But before we proceed, we will look into specific quantum numbers of the relative system and study expectation values of the various terms of the interaction model defined above.
When solving the scattering equation or solving the two-nucleon problem, it is convenient to rewrite the Schroedinger equation, due to the spherical symmetry of the Hamiltonian, in relative and center-of-mass coordinates. This will also define the quantum numbers of the relative and center-of-mass system and will aid us later in solving the so-called Lippman-Schwinger equation for the scattering problem.
We define the center-of-mass (CoM) momentum as
with $\hbar=c=1$ the wave number $k_i=p_i$, with $p_i$ the pertinent momentum of a single-particle state. We have also the relative momentum
We will below skip the indices $ij$ and simply write $\mathbf{k}$
In a similar fashion we can define the CoM coordinate
and the relative distance
With the definitions
and
we can rewrite the two-particle kinetic energy (note that we use $\hbar=c=1$) as
where $m_n$ is the average of the proton and the neutron masses.
Since the two-nucleon interaction depends only on the relative distance, this means that we can separate Schroedinger's equation in an equation for the center-of-mass motion and one for the relative motion.
With an equation for the relative motion only and a separate one for the center-of-mass motion we need to redefine the two-body quantum numbers.
Previously we had a two-body state vector defined as $|(j_1j_2)JM_J\rangle$ in a coupled basis. We will now define the quantum numbers for the relative motion. Here we need to define new orbital momenta (since these are the quantum numbers which change). We define
where $\hat{l}$ is the orbital momentum associated with the relative motion and $\hat{L}$ the corresponding one linked with the CoM. The total spin $S$ is unchanged since it acts in a different space. We have thus that
which allows us to define the angular momentum of the relative motion
where ${ \cal J}$ is the total angular momentum of the relative motion.
The total two-nucleon state function has to be anti-symmetric. The total function contains a spatial part, a spin part and an isospin part. If isospin is conserved, an $s$-wave with spin $S=0$ leads to an isospin two-body state with $T=1$, since the spatial part is symmetric and the spin part is anti-symmetric.
Since the projections for $T$ are $T_z=-1,0,1$, we can have a $pp$, an $nn$ and a $pn$ state.
For $l=0$ and $S=1$, a so-called triplet state, $^3S_1$, we must have $T=0$, meaning that we have only one state, a $pn$ state. For other partial waves, we can systematize this in a table as follows, listing states up to $f$ waves and recalling that $|\mathbf{l}-\mathbf{S}| \le |\mathbf{J}| \le |\mathbf{l}+\mathbf{S}|$,
$^{2S+1}l_J$ | $J$ | $l$ | $S$ | $T$ | $\vert pp\rangle$ | $\vert pn\rangle$ | $\vert nn\rangle$ |
---|---|---|---|---|---|---|---|
$^{1}S_0$ | 0 | 0 | 0 | 1 | yes | yes | yes |
$^{3}S_1$ | 1 | 0 | 1 | 0 | no | yes | no |
$^{3}P_0$ | 0 | 1 | 1 | 1 | yes | yes | yes |
$^{1}P_1$ | 1 | 1 | 0 | 0 | no | yes | no |
$^{3}P_1$ | 1 | 1 | 1 | 1 | yes | yes | yes |
$^{3}P_2$ | 2 | 1 | 1 | 1 | yes | yes | yes |
$^{3}D_1$ | 1 | 2 | 1 | 0 | no | yes | no |
$^{3}F_2$ | 2 | 3 | 1 | 1 | yes | yes | yes |
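The entries of such a table can also be generated programmatically by imposing the antisymmetry requirement $(-1)^{l+S+T}=-1$ for a two-nucleon state; the small sketch below does exactly that, up to $f$ waves. The output format is of course just one possible choice.

In [1]:

# Two-nucleon partial waves up to l=3 from the generalized Pauli principle (-1)^(l+S+T) = -1
spectroscopic = "SPDF"
for l in range(4):
    for S in (0, 1):
        for J in range(abs(l - S), l + S + 1):
            for T in (0, 1):
                if (-1) ** (l + S + T) == -1:
                    pp_nn = "yes" if T == 1 else "no"      # pp and nn states require T=1
                    print("^%d%s_%d   T = %d   pp/nn: %s   pn: yes"
                          % (2 * S + 1, spectroscopic[l], J, T, pp_nn))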
The tensor force is given by
where the Pauli matrices are defined as
$$
\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix},
\qquad
\sigma_y = \begin{bmatrix} 0 & -\imath \\ \imath & 0\end{bmatrix},
\qquad
\sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1\end{bmatrix},
$$
with the properties $\boldsymbol{\sigma} = 2\mathbf{S}$ (the spin of the system, being $1/2$ for nucleons), $\sigma^2_x=\sigma^2_y=\sigma^2_z=\mathbf{1}$ and obeying the commutation and anti-commutation relations $\{\sigma_x,\sigma_y\} =0$ and $[\sigma_x,\sigma_y] =2\imath\sigma_z$ etc.
When we look at the expectation value of $\langle \mathbf{\sigma}_1\cdot\mathbf{\sigma}_2\rangle$, we can rewrite this expression in terms of the spin $\mathbf{S}=\mathbf{s}_1+\mathbf{s}_2$, resulting in
where $s_1=s_2=1/2$, leading to
Similarly, the expectation value of the spin-orbit term is
which means that for $s$-waves with either $S=0$ and thereby $J=0$ or $S=1$ and $J=1$, the expectation value for the spin-orbit force is zero. With the above phenomenological model, the only contributions to the expectation value of the potential energy for $s$-waves stem from the central and the spin-spin components since the expectation value of the tensor force is also zero.
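These expectation values follow from the relations just quoted, $\langle\boldsymbol{\sigma}_1\cdot\boldsymbol{\sigma}_2\rangle = 2S(S+1)-3$ and $\langle\mathbf{L}\cdot\mathbf{S}\rangle = \frac{1}{2}\left[J(J+1)-l(l+1)-S(S+1)\right]$ (in units where $\hbar=1$), and are easily tabulated for a few partial waves, as in the small sketch below.

In [1]:

def sigma_dot_sigma(S):
    # <sigma_1 . sigma_2> = 2S(S+1) - 3 for two spin-1/2 particles
    return 2 * S * (S + 1) - 3

def l_dot_s(J, l, S):
    # <L.S> = [J(J+1) - l(l+1) - S(S+1)]/2
    return 0.5 * (J * (J + 1) - l * (l + 1) - S * (S + 1))

for name, J, l, S in (("1S0", 0, 0, 0), ("3S1", 1, 0, 1),
                      ("3P0", 0, 1, 1), ("3P1", 1, 1, 1), ("3P2", 2, 1, 1)):
    print(name, "  <sigma.sigma> =", sigma_dot_sigma(S), "  <L.S> =", l_dot_s(J, l, S))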
For two nucleons, with spin values $s=1/2$ only, the expectation value of the tensor force operator is
$l \backslash l'$ | $J+1$ | $J$ | $J-1$ |
---|---|---|---|
$J+1$ | $-\frac{2(J+2)}{2J+1}$ | $0$ | $\frac{6\sqrt{J(J+1)}}{2J+1}$ |
$J$ | $0$ | $2$ | $0$ |
$J-1$ | $\frac{6\sqrt{J(J+1)}}{2J+1}$ | $0$ | $-\frac{2(J-1)}{2J+1}$ |

We will derive these expressions after we have discussed the Wigner-Eckart theorem.
If we now add isospin to our simple $V_4$ interaction model, we end up with $8$ operators, popularly dubbed $V_8$ interaction model. The explicit form reads
From 1950 till approximately 2000: One-Boson-Exchange (OBE) models dominate. These are models which typically include several low-mass mesons, that is with masses below 1 GeV. Potentials which are based upon the standard non-relativistic operator structure are called "Phenomenological Potentials". Some historically important examples are
Gammel-Thaler potential (Phys. Rev. 107, 291 and 1339 (1957)) and the
Hamada-Johnston potential, Nucl. Phys. 34, 382 (1962)), both with a hard core.
Reid potential (Ann. Phys. (N.Y.) 50, 411 (1968)), soft core.
Argonne $V_{14}$ potential (Wiringa et al., Phys. Rev. C 29, 1207 (1984)) with 14 operators and the Argonne $V_{18}$ potential (Wiringa et al., Phys. Rev. C 51, 38 (1995)), uses 18 operators
A good historical reference: R. Machleidt, Adv. Nucl. Phys. 19, 189 (1989).
Now: models based on chiral perturbation theory. These are effective models with nucleons and pions as degrees of freedom only. The other mesons which appeared in standard one-boson model appear as multi-pion resonances.
The total two-nucleon state function has to be anti-symmetric. The total function contains a spatial part, a spin part and an isospin part. If isospin is conserved, an $s$-wave with spin $S=0$ leads to an isospin two-body state with $T=1$, since the spatial part is symmetric and the spin part is anti-symmetric.
Since the projections for $T$ are $T_z=-1,0,1$, we can have a $pp$, an $nn$ and a $pn$ state.
For $l=0$ and $S=1$, a so-called triplet state, $^3S_1$, we must have $T=0$, meaning that we have only one state, a $pn$ state. For other partial waves, see exercises below.
The one-pion exchange contribution (see derivation below), can be written as
Here the constant $f_{\pi}^{2}/4\pi\approx 0.08$ and the mass of the pion is $m_\pi\approx 140$ MeV/$\mbox{c}^2$.
Let us look closer at specific partial waves for which one-pion exchange is applicable. If we have $S=0$ and $T=0$, the orbital momentum has to be an odd number in order for the total anti-symmetry to be obeyed. For $S=0$, the tensor force component is zero, meaning that the only contribution is
since $\langle\mathbf{\sigma}_1\cdot\mathbf{\sigma}_2\rangle=-3$, that is, we obtain a repulsive contribution to partial waves like $^1P_1$.
Since $S=0$ yields always a zero tensor force contribution, for the combination of $T=1$ and then even $l$ values, we get an attractive contribution
With $S=1$ and $T=0$, $l$ can only take even values in order to obey the anti-symmetry requirements and we get
while for $S=1$ and $T=1$, $l$ can only take odd values, resulting in a repulsive contribution
The central part of one-pion exchange interaction, arising from the spin-spin term,
is thus attractive for $s$-waves and all even $l$ values. For $p$-waves and all other odd values
it is repulsive. However, its overall strength is weak. This is discussed further in one of exercises below.
To describe the interaction between the various baryons and mesons of the previous table we choose the following phenomenological Lagrangians for spin $1/2$ baryons
and
for pseudoscalar (ps), scalar (s) and vector (v) coupling, respectively. The factors $g^{v}$ and $g^{t}$ are the vector and tensor coupling constants, respectively.
For spin $1/2$ baryons, the fields $\Psi$ are expanded in terms of the Dirac spinors (positive energy solution shown here with $\overline{u}u=1$)
with $\chi$ the familiar Pauli spinor and $E(k) =\sqrt{m^2 +|\mathbf{k}|^2}$. The positive energy part of the field $\Psi$ reads
with $a$ being a fermion annihilation operator.
Expanding the free Dirac spinors in terms of $1/m$ ($m$ is here the mass of the relevant baryon) results, to lowest order, in the familiar non-relativistic expressions for baryon-baryon potentials. The configuration space version of the interaction can be approximated as
$$
V(r)=\left\{C_C + C_\sigma\, \boldsymbol{\sigma}_1\cdot\boldsymbol{\sigma}_2
+ C_T\left(1+\frac{3}{m_\alpha r}+\frac{3}{(m_\alpha r)^2}\right)S_{12}(\hat{r})
+ C_{SL}\left(\frac{1}{m_\alpha r}+\frac{1}{(m_\alpha r)^2}\right)\mathbf{L}\cdot\mathbf{S}\right\}
\frac{e^{-m_\alpha r}}{m_\alpha r},
$$
where $m_{\alpha}$ is the mass of the relevant meson and $S_{12}$ is the familiar tensor term.
We derive now the non-relativistic one-pion exchange interaction.
Here $p_{1}$, $p_{1}'$, $p_{2}$, $p_{2}'$ and $k=p_{1}-p_{1}'$ denote
four-momenta.
The vertices are
given by the pseudovector Lagrangian
From the Feynman diagram rules we can write the two-body interaction as
The factors $p_{1}-p_{1}'=p_{2}'-p_{2}$ are both the four-momentum of the exchanged meson and come from the derivative of the meson field in the interaction Lagrangian. The Dirac spinors obey
Using these relations, together with $\{\gamma_{5},\gamma_{\mu}\}=0$, we find
and
We get
By inserting expressions for the Dirac spinors, we find
Similarly
In the CM system we have $\mathbf{p}_{2}=-\mathbf{p}_{1}$, $\mathbf{p'}_{2}=-\mathbf{p'}_{1}$ and so $E_{2}=E_{1}$, $E_{2}'=E_{1}'$.
We can then write down the relativistic contribution
to the NN potential in the CM system:
In the non-relativistic limit we have to lowest order
and then $(p_{1}-p_{1}')^{2}=-\mathbf{k}^{2}$, so we get for the contribution to the NN potential
We have omitted exchange terms and the isospin term $\mathbf{\tau}_1\cdot\mathbf{\tau}_2$.
We have
In coordinate space we have
resulting in
We obtain
Carrying out the differentiation of
we arrive at the famous one-pion exchange potential with central and tensor parts
For the full potential add the exchange part and the $\mathbf{\tau}_1\cdot\mathbf{\tau}_2$ term as well. (Subtle point: there is a divergence which gets cancelled by using cutoffs) This leads to coefficients $C_{\sigma}$ and $C_T$ which are fitted to data.
When we perform similar non-relativistic expansions for scalar and vector mesons we obtain for the $\sigma$ meson
We note an attractive central force and spin-orbit force. This term has an intermediate range. We have defined $1/2(p_{1}+p_{1}')=\mathbf{q}$. For the full potential add the exchange part and the isospin dependence as well.
We obtain for the $\omega$ meson
We note a repulsive central force and an attractive spin-orbit force. This term has short range. For the full potential add the exchange part and the isospin dependence as well.
Finally for the $\rho$ meson
We note a tensor force with sign opposite to that of the pion. This term has short range. For the full potential add the exchange part and the isospin dependence as well.
Can use a one-boson exchange picture to construct a nucleon-nucleon interaction a la QED
Non-relativistic approximation yields amongst other things a spin-orbit force which is much stronger than in atoms.
At large intermediate distances pion exchange dominates while pion resonances (other mesons) dominate at intermediate and short range
What follows now is a more technical discussion on how we can solve the two-nucleon problem. This will lead us to the so-called Lippman-Schwinger equation for the scattering problem and a rewrite of Schroedinger's equation in relative and center-of-mass coordinates.
Before we break down the Schroedinger equation into a partial wave decomposition, we derive now the so-called Lippman-Schwinger equation. We will do this in an operator form first. Thereafter, we rewrite it in terms of various quantum numbers such as relative momenta, orbital momenta etc. The Schroedinger equation in abstract vector representation is
In our case for the two-body problem $\hat{H}_0$ is just the kinetic energy. We rewrite it as
We assume that the inverse of $\left( \hat{H}_0 -E_n\right)$ exists and rewrite this equation as
The equation
is normally solved in an iterative fashion. We assume first that
where $\vert\phi_n \rangle$ are the eigenfunctions of
the so-called unperturbed problem. In our case, these will simply be the kinetic energies of the relative motion.
Inserting $\vert\phi_n \rangle$ on the right-hand side of
yields
as our first iteration. Reinserting again gives
and continuing we obtain
It is easy to see that
can be rewritten as
which we rewrite as
In operator form we have thus
We multiply from the left with $\hat{V}$ and $\langle \phi_m \vert$ and obtain
We define thereafter the so-called $T$-matrix as
We can rewrite our equation as
The equation
is called the Lippman-Schwinger equation. Inserting the completeness relation
we have
which is (when we specify the state $\vert\phi_n \rangle$) an integral equation that can actually be solved by matrix inversion easily! The unknown quantity is the $T$-matrix.
Now we wish to introduce a partial wave decomposition in order to solve the Lippman-Schwinger equation. With a partial wave decomposition we can reduce a three-dimensional integral equation to a one-dimensional one.
Let us continue with our Schroedinger equation in the abstract vector representation
Here $T$ is the kinetic energy operator and $V$ is the potential operator. The eigenstates form a complete orthonormal set according to
The most commonly used representations are the coordinate and the momentum space representations. They define the completeness relations
Here the basis states in both $\mathbf{r}$- and $\mathbf{k}$-space are Dirac delta-function normalized. From this it follows that the plane-wave states are given by,
which is a transformation function defining the mapping from the abstract $\vert\mathbf{k}\rangle$ to the abstract $\vert\mathbf{r}\rangle $ space.
That the $\mathbf{r}$-space basis states are delta-function normalized follows from
and the same for the momentum space basis states,
Projecting on momentum states, we obtain the momentum space Schroedinger equation as
Here the notation $\psi_n(\mathbf{k}) =\langle\mathbf{k}\vert\psi_n\rangle $ and $\langle\mathbf{k}\vert V\vert\mathbf{k}' \rangle =V(\mathbf{k}, \mathbf{k'})$ has been introduced. The potential in momentum space is given by a double Fourier-transform of the potential in coordinate space, i.e.
Here it is assumed that the potential interaction does not contain any spin dependence. Instead of a differential equation in coordinate space, the Schroedinger equation becomes an integral equation in momentum space. This has many tractable features. Firstly, most realistic nucleon-nucleon interactions derived from field-theory are given explicitly in momentum space. Secondly, the boundary conditions imposed on the differential equation in coordinate space are automatically built into the integral equation. And last, but not least, integral equations are easy to numerically implement, and convergence is obtained by just increasing the number of integration points. Instead of solving the three-dimensional integral equation, an infinite set of 1-dimensional equations can be obtained via a partial wave expansion.
The wave function $\psi_n(\mathbf{k})$ can be expanded in a complete set of spherical harmonics, that is
where the angular momentum projected potential takes the form,
Here $d\hat{k} = d\theta\sin(\theta)d\varphi$. Note that for now we discuss only the orbital momentum; we will include spin and the total angular momentum later.
The potential is often given in position space. It is then convenient to establish the connection between $V_{lm, l'm'}(k,k')$ and $V_{lm, l'm'}(r,r')$. Inserting the completeness relation for the position quantum numbers in equation (112) results in
Since the plane waves depend only on the absolute values of position and momentum, $\vert\mathbf{k}\vert$ and $\vert\mathbf{r}\vert$, and the angle between them, $\theta_{kr}$, they may be expanded in terms of bipolar harmonics of zero rank, i.e.
where the addition theorem for spherical harmonics has been used in order to write
the expansion in terms of Legendre polynomials. The spherical Bessel functions, $j_l(z)$,
are given in terms of Bessel functions of the first kind with half integer orders,
Inserting the plane-wave expansion into the brackets of equation (113) yields,
The connection between the momentum- and position space angular momentum projected potentials are then given,
which is known as a double Fourier-Bessel transform. The position space angular momentum projected potential is given by
No assumptions about locality/non-locality and deformation of the interaction have so far been made,
and the result in equation (114) is general. In position space the Schroedinger equation
takes form of an integro-differential equation in case of a non-local interaction,
in momentum space the Schroedinger equation is an ordinary integral equation of the Fredholm type,
see equation (111). This is a further advantage of the momentum space approach as compared to
the standard position space approach.
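As an illustration of such a double Fourier-Bessel transform, the sketch below computes an angular momentum projected momentum-space matrix element of a local potential by direct quadrature. The $2/\pi$ prefactor and the Yukawa test potential are assumptions made for this example; the prefactor in particular depends on the normalization convention chosen for the plane-wave expansion and should be matched to the definitions above.

In [1]:

import numpy as np
from scipy.special import spherical_jn

def v_l(l, k, kp, v_of_r, rmax=30.0, nr=3000):
    # V_l(k,k') ~ (2/pi) * int_0^infty dr r^2 j_l(k r) V(r) j_l(k' r)  (assumed convention)
    r = np.linspace(1e-6, rmax, nr)
    integrand = r ** 2 * spherical_jn(l, k * r) * v_of_r(r) * spherical_jn(l, kp * r)
    return (2.0 / np.pi) * np.sum(integrand) * (r[1] - r[0])

# s-wave matrix element of an attractive Yukawa potential, momenta in fm^-1
yukawa = lambda r: -1.0 * np.exp(-0.7 * r) / r
print(v_l(0, 0.5, 0.8, yukawa))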
If we assume that the
interaction is of local character, i.e.
then equation (115) reduces to
and equation (114) reduces to
where
In the case that the interaction is central, $V(\mathbf{r}) = V(r)$, then
and
where the momentum space representation of the interaction finally reads,
For a local and spherical symmetric potential, the coupled momentum space Schroedinger equations given in equation (111) decouples in angular momentum, giving
Where we have written $\psi_{n l }(k)=\psi_{nlm}(k)$, since the equation becomes independent of the projection $m$ for spherical symmetric interactions. The momentum space wave functions $\psi_{n l}(k)$ defines a complete orthogonal set of functions, which spans the space of functions with a positive finite Euclidean norm (also called $l^2$-norm), $\sqrt{\langle\psi_n\vert\psi_n\rangle}$, which is a Hilbert space. The corresponding normalized wave function in coordinate space is given by the Fourier-Bessel transform
We will thus assume that the interaction is spherically symmetric and use the partial wave expansion of the plane waves in terms of spherical harmonics. This means that we can separate the radial part of the wave function from its angular dependence. The wave function of the relative motion is described in terms of plane waves as
where $j_l$ is a spherical Bessel function and $Y_{lm}$ the spherical harmonics.
In terms of the relative and center-of-mass momenta $\mathbf{k}$ and $\mathbf{K}$, the potential in momentum space is related to the nonlocal operator $V(\mathbf{r},\mathbf{r}')$ by
We will assume that the interaction is spherically symmetric. Can separate the radial part of the wave function from its angular dependence. The wave function of the relative motion is described in terms of plane waves as
where $j_l$ is a spherical Bessel function and $Y_{lm}$ the spherical harmonic.
This partial wave basis is useful for defining the operator for the nucleon-nucleon interaction, which is symmetric with respect to rotations, parity and isospin transformations. These symmetries imply that the interaction is diagonal with respect to the quantum numbers of total relative angular momentum ${\cal J}$, spin $S$ and isospin $T$ (we skip isospin for the moment). Using the above plane wave expansion, and coupling to final ${\cal J}$ and $S$ and $T$ we get
where we have defined
We have omitted the momentum of the center-of-mass motion $\mathbf{K}$ and the corresponding orbital momentum $L$, since the interaction is diagonal in these variables.
We wrote the Lippman-Schwinger equation as
How do we rewrite it in a partial wave expansion with momenta $k$?
The general structure of the $T$-matrix in partial waves is
The shorthand notation
denotes the $T$-matrix with momenta $k$ and $k'$ and orbital momenta $l$ and $l'$ of the relative motion, and $K$ is the corresponding momentum of the center-of-mass motion. Further, $L$, ${\cal J}$, $S$ and $T$ are the orbital momentum of the center-of-mass motion, the total angular momentum, spin and isospin, respectively. Due to the nuclear tensor force, the interaction is not diagonal in $ll'$.
Using the orthogonality properties of the Clebsch-Gordan coefficients and the spherical harmonics, we obtain the well-known one-dimensional angle independent integral equation
Inserting the denominator we arrive at
To parameterize the nucleon-nucleon interaction we solve the Lippman-Schwinger equation
The shorthand notation
denotes the $T(V)$-matrix with momenta $k$ and $k'$ and orbital momenta $l$ and $l'$ of the relative motion, and $K$ is the corresponding momentum of the center-of-mass motion. Further, $L$, ${\cal J}$, and $S$ are the orbital momentum of the center-of-mass motion, the total angular momentum and spin, respectively. We skip for the moment isospin.
For scattering states, the energy is positive, $E>0$. The Lippman-Schwinger equation (a rewrite of the Schroedinger equation) is an integral equation where we have to deal with the amplitude $R(k,k')$ (reaction matrix, which is the real part of the full complex $T$-matrix) defined through the integral equation for one partial wave (no coupled-channels)
For negative energies (bound states), and for intermediate scattering states blocked by occupied states below the Fermi level, there is no singularity in the denominator.
The symbol ${\cal P}$ in the previous slide indicates that Cauchy's principal-value prescription is used in order to avoid the singularity arising from the zero of the denominator.
The total kinetic energy of the two incoming particles in the center-of-mass system is
The matrix $R_l(k,k')$ relates to the phase shifts through its diagonal elements as
From now on we will drop the subscript $l$ in all equations. In order to solve the Lippman-Schwinger equation in momentum space, we need first to write a function which sets up the mesh points. We need to do that since we are going to approximate an integral through
where we have fixed $N$ lattice points through the corresponding weights $w_i$ and points $x_i$. These are typically obtained via methods like Gaussian quadrature.
If you use Gauss-Legendre quadrature, the points are determined for the interval $x_i\in [-1,1]$. You map these points over to the limits in your integral. You can then use the following mapping
and
If you choose units fm$^{-1}$ for $k$, set $const=1$. If you choose to work with MeV, set $const\sim 200$ ($\hbar c=197$ MeV fm).
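A possible implementation of such a mesh is sketched below. The mapping $k=\mathrm{const}\times\tan\left[\frac{\pi}{4}(1+x)\right]$ is one commonly used choice for taking the Gauss-Legendre points from $[-1,1]$ to $[0,\infty)$; it is an assumption here, since the explicit mapping is not reproduced above, and any other smooth mapping will do.

In [1]:

import numpy as np

def setup_mesh(N, const=1.0):
    # Gauss-Legendre points on [-1,1] mapped to k in [0, infinity);
    # the weights pick up the Jacobian dk/dx of the mapping.
    x, w = np.polynomial.legendre.leggauss(N)
    t = 0.25 * np.pi * (1.0 + x)
    k = const * np.tan(t)
    wk = const * 0.25 * np.pi * w / np.cos(t) ** 2
    return k, wk

k, wk = setup_mesh(100)
print("check: integral of exp(-k) over [0,inf) =", np.sum(wk * np.exp(-k)))   # close to 1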
The principal value integral is rather tricky to evaluate numerically, mainly since computers have limited precision. We will here use a subtraction trick often used when dealing with singular integrals in numerical calculations. We introduce first the calculus relation
It means that the curve $1/(k-k_0)$ has equal and opposite areas on both sides of the singular point $k_0$. If we break the integral into one over positive $k$ and one over negative $k$, a change of variable $k\rightarrow -k$ allows us to rewrite the last equation as
We can then express a principal values integral as
where the right-hand side is no longer singular at $k=k_0$, it is proportional to the derivative $df/dk$, and can be evaluated numerically as any other integral.
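A quick numerical check of this subtraction trick is sketched below; the test function $f(k)=e^{-k^2}$, the value $k_0=1$ and the tangent mesh mapping are arbitrary choices made only for the illustration.

In [1]:

import numpy as np

# P int_0^infty f(k) dk/(k^2 - k0^2) via the subtraction trick:
# the integrand (f(k) - f(k0))/(k^2 - k0^2) is regular at k = k0.
x, w = np.polynomial.legendre.leggauss(400)
t = 0.25 * np.pi * (1.0 + x)
k = np.tan(t)
wk = 0.25 * np.pi * w / np.cos(t) ** 2
k0 = 1.0
f = lambda q: np.exp(-q ** 2)
print("principal value integral:", np.sum(wk * (f(k) - f(k0)) / (k ** 2 - k0 ** 2)))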
We can then use this trick to obtain
This is the equation to solve numerically in order to calculate the phase shifts. We are interested in obtaining $R(k_0,k_0)$.
How do we proceed?
Using the mesh points $k_j$ and the weights $\omega_j$, we reach
This equation contains now the unknowns $R(k_i,k_j)$ (with dimension $N\times N$) and $R(k_0,k_0)$.
We can turn it into an equation with dimension $(N+1)\times (N+1)$ with a mesh which contains the original mesh points $k_j$ for $j=1,N$ and the point which corresponds to the energy $k_0$. Consider the latter as the 'observable' point. The mesh points become then $k_j$ for $j=1,N$ and $k_{N+1}=k_0$.
With these new mesh points we define the matrix
where $\delta$ is the Kronecker $\delta$ and
and
The first task is then to set up the matrix $A$ for a given $k_0$. This is an $(N+1)\times (N+1)$ matrix. It can be convenient to have an outer loop which runs over the chosen observable values for the energy $k_0^2/m$. Note that all mesh points $k_j$ for $j=1,N$ must be different from $k_0$. Note also that $V(k_i,k_j)$ is an $(N+1)\times (N+1)$ matrix.
With the matrix $A$ we can rewrite the problem as a matrix problem of dimension $(N+1)\times (N+1)$. All matrices $R$, $A$ and $V$ have this dimension and we get
or just
Since you have already defined $A$ and $V$ (these are stored as $(N+1)\times (N+1)$ matrices), the final equation involves only the unknown $R$. We obtain it by matrix inversion, i.e.,
Thus, to obtain $R$, you will need to set up the matrices $A$ and $V$ and invert the matrix $A$. With the inverse $A^{-1}$, a matrix multiplication with $V$ results in $R$.
With $R$ you can then evaluate the phase shifts by noting that
where $\delta$ are the phase shifts.
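Putting the pieces together, a compact sketch of the whole procedure could look as follows. The Yukawa-type $S$-wave potential is only a toy model, and the explicit factors (the $2/\pi$ in the quadrature weights $u_j$ and the relation $\tan\delta=-mk_0R(k_0,k_0)$) follow one common convention; they are assumptions to be checked against the expressions above. The structure, however, is exactly the $(N+1)\times(N+1)$ matrix problem $AR=V$ described in the text.

In [1]:

import numpy as np

def setup_mesh(N, const=1.0):
    # Gauss-Legendre points mapped from [-1,1] to k in [0, infinity)
    x, w = np.polynomial.legendre.leggauss(N)
    t = 0.25 * np.pi * (1.0 + x)
    return const * np.tan(t), const * 0.25 * np.pi * w / np.cos(t) ** 2

def v_swave(k, kp, V0=-1.5, mu=0.7):
    # l=0 momentum-space matrix element of a toy Yukawa potential V0*exp(-mu*r)/r
    return V0 / (4.0 * k * kp) * np.log(((k + kp) ** 2 + mu ** 2) / ((k - kp) ** 2 + mu ** 2))

def phase_shift(k0, N=100, mass=1.0):
    k, wk = setup_mesh(N)
    kk = np.append(k, k0)                       # the (N+1)-th mesh point is k0 itself
    V = v_swave(kk[:, None], kk[None, :])
    u = np.zeros(N + 1)
    u[:N] = (2.0 / np.pi) * wk * k ** 2 * mass / (k0 ** 2 - k ** 2)
    u[N] = -(2.0 / np.pi) * mass * k0 ** 2 * np.sum(wk / (k0 ** 2 - k ** 2))
    A = np.eye(N + 1) - V * u[None, :]          # A_ij = delta_ij - V(k_i,k_j) u_j
    R = np.linalg.solve(A, V)                   # solves A R = V
    return np.degrees(np.arctan(-mass * k0 * R[N, N]))

for k0 in (0.1, 0.3, 0.5, 1.0):                 # on-shell momenta in fm^-1 (toy units)
    print("k0 =", k0, " fm^-1   delta_0 =", round(phase_shift(k0), 3), "degrees")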
For elastic scattering, the scattering potential can only change the outgoing spherical wave function up to a phase. In the asymptotic limit, far away from the scattering potential, we get for the spherical Bessel function
The outgoing wave will change by a phase shift $\delta_l$, from which we can define the S-matrix $S_l(k) = e^{2i\delta_l(k)}$. Thus, we have
The solution to the Schroedinger equation for a spherically symmetric potential will have the form
where $f(\theta)$ is the scattering amplitude, and related to the differential cross section as
Using the expansion of a plane wave in spherical waves, we can relate the scattering amplitude $f(\theta)$ with the partial wave phase shifts $\delta_l$ by identifying the outgoing wave
which can be simplified further by cancelling $i^l$ with $e^{-il\pi/2}$
We have
with
where the partial wave scattering amplitude is given by
With Eulers formula for the cotangent, this can also be written as
Examples of negative and positive phase shifts for repulsive and attractive potentials, respectively.
The integrated cross section is given by
$$
\sigma = 2\pi \int_0^{\pi} |f(\theta)|^2 \sin \theta d\theta
= 2\pi \sum_l \left|\frac{(2l+1)}{k} \sin(\delta_l)\right|^2 \int_0^{\pi} \left(P_l(\cos\theta)\right)^2 \sin\theta d\theta
= \frac{4\pi}{k^2} \sum_l (2l+1) \sin^2\delta_l(k) = 4\pi \sum_l (2l+1)|f_l(\theta)|^2,
$$
where the orthogonality of the Legendre polynomials was used to evaluate the last integral,
$$
\int_0^{\pi} P_l(\cos \theta)^2 \sin \theta d\theta = \frac{2}{2l+1}.
$$
Thus, the total cross section is the sum of the partial-wave cross sections. Note that the differential cross section contains cross-terms from different partial waves. The integral over the full sphere enables the use of the Legendre orthogonality, and this kills the cross-terms.
At low energy, $k \rightarrow 0$, $S$-waves are most important. In this region we can define the scattering length $a$ and the effective range $r_0$. The $S$-wave scattering amplitude is given by
$$
f_l(\theta) = \frac{1}{k}\frac{1}{\cot \delta_l(k) - i}.
$$
Taking the limit $k \rightarrow 0$ gives us the effective range expansion
$$
k \cot \delta_0 = -\frac{1}{a} + \frac{1}{2}r_0 k^2 + \ldots
$$
Thus the low-energy cross section is given by
$$
\sigma = 4\pi a^2.
$$
If the system contains a bound state, the scattering length will become positive (neutron-proton in $^3S_1$). For the $^1S_0$ wave, the scattering length is negative and large. This indicates that the wave function of the system is at the verge of turning over to get a node, but cannot create a bound state in this wave.
Examples of scattering lengths.
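As a simple numerical illustration of the low-energy limit $\sigma=4\pi a^2$, the sketch below uses the singlet scattering lengths quoted earlier and converts to barn ($1\,\mathrm{b}=100\,\mathrm{fm}^2$).

In [1]:

import numpy as np

# Low-energy s-wave cross sections sigma = 4*pi*a^2 from the 1S0 scattering lengths above
for label, a in (("pp", -17.3), ("nn", -18.8)):
    sigma_fm2 = 4.0 * np.pi * a ** 2
    print(label, ": a =", a, "fm   sigma =", round(sigma_fm2 / 100.0, 1), "barn")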
It is important to realize that the phase shifts themselves are not observables. The measurable scattering quantity is the cross section, or the differential cross section. The partial wave phase shifts can be thought of as a parameterization of the (experimental) cross sections. The phase shifts provide insights into the physics of partial wave projected nuclear interactions, and are thus important quantities to know.
The nucleon-nucleon differential cross section have been measured at almost all energies up to the pion production threshold (290 MeV in the Lab frame), and this experimental data base is what provides us with the constraints on our nuclear interaction models. In order to pin down the unknown coupling constants of the theory, a statistical optimization with respect to cross sections need to be carried out. This is how we constrain the nucleon-nucleon interaction in practice!
Nijmegen phase shifts for selected partial waves.
The $pp$-data is more accurate than the $np$-data, and for $nn$ there is no data. The quality of a potential is gauged by the $\chi^2$/datum with respect to the scattering data base
$T_{\mathrm{lab}}$ bin (MeV) | N3LO$^1$ | NNLO$^2$ | NLO$^2$ | AV18$^3$ |
---|---|---|---|---|
0-100 | 1.05 | 1.7 | 4.5 | 0.95 |
100-190 | 1.08 | 22 | 100 | 1.10 |
190-290 | 1.15 | 47 | 180 | 1.11 |
$\mathbf{0-290}$ | $\mathbf{1.10}$ | $\mathbf{20}$ | $\mathbf{86}$ | $\mathbf{1.04}$ |
R. Machleidt et al., Phys. Rev. C68, 041001(R) (2003)
E. Epelbaum et al., Eur. Phys. J. A19, 401 (2004)
R. B. Wiringa et al., Phys. Rev. C51, 38 (1995)
An example: chiral twobody interactions
R. Machleidt, D. R. Entem, Phys. Rep. 503, 1 (2011)
E. Epelbaum, H.-W. Hammer, Ulf-G. Meißner, Rev. Mod. Phys. 81, 1773 (2009)
Proton-neutron $^1S_0$ phase shift. Note that the Nijm93 PWA phase shift becomes negative at $T_{\mathrm{lab}}> 250$ MeV. This indicates that the nucleon-nucleon potential is repulsive at short distances.
Proton-neutron $^1S_0$ phase shift.
Differential cross section
Proton-neutron $^1S_0$ phase shift.
List all partial waves allowed by the Pauli principle with isospin $T$, its projection $T_z$, spin $S$, orbital angular momentum $l$ and total angular momentum $J$ for $J\le 3$. Use the standard spectroscopic notation $^{2S+1}L_J$ to label the different partial waves. A proton-proton state has $T_z=-1$, a proton-neutron state has $T_z=0$ and a neutron-neutron state has $T_z=1$.
a) Find the closed form expression for the spin-orbit force. Show that the spin-orbit force {\bf LS} gives a zero contribution for $S$-waves (orbital angular momentum $l=0$). What is the value of the spin-orbit force for spin-singlet states ($S=0$)?
b) Find thereafter the expectation value of $\mathbf{\sigma}_1\cdot\mathbf{\sigma}_2$, where $\mathbf{\sigma}_i$ are so-called Pauli matrices.
A simple parametrization of the nucleon-nucleon force is given by what is called the $V_8$ potential model, where we have kept eight different operators. These operators contain a central force, a spin-orbit force, a spin-spin force and a tensor force. Several features of the nuclei can be explained in terms of these four components. Without the Pauli matrices for isospin the final form of such an interaction model results in the following form:
$$
V(r)=\left\{C_C + C_\sigma\, \boldsymbol{\sigma}_1\cdot\boldsymbol{\sigma}_2
+ C_T\left(1+\frac{3}{m_\alpha r}+\frac{3}{(m_\alpha r)^2}\right)S_{12}(\hat{r})
+ C_{SL}\left(\frac{1}{m_\alpha r}+\frac{1}{(m_\alpha r)^2}\right)\mathbf{L}\cdot\mathbf{S}\right\}
\frac{e^{-m_\alpha r}}{m_\alpha r},
$$
where $m_{\alpha}$ is the mass of the relevant meson and $S_{12}$ is the familiar tensor term. The various coefficients $C_i$ are normally fitted so that the potential reproduces experimental scattering cross sections. By adding terms which include the isospin Pauli matrices results in an interaction model with eight operators.
The expectation value of the tensor operator is non-zero only for $S=1$. We will show this in a forthcoming lecture, after we have derived the Wigner-Eckart theorem. Here it suffices to know that the expectation value of the tensor force for different partial waves is (with $l$ the orbital angular momentum and ${\cal J}$ the total angular momentum in the relative and center-of-mass frame of motion)
$$
\langle l={\cal J},S=1|S_{12}|l'={\cal J},S=1\rangle = 2,
$$

$$
\langle l={\cal J}+1,S=1|S_{12}|l'={\cal J}+1,S=1\rangle = -\frac{2({\cal J}+2)}{2{\cal J}+1},
$$

$$
\langle l={\cal J}-1,S=1|S_{12}|l'={\cal J}-1,S=1\rangle = -\frac{2({\cal J}-1)}{2{\cal J}+1},
$$

$$
\langle l={\cal J}\pm 1,S=1|S_{12}|l'={\cal J}\mp 1,S=1\rangle = \frac{6\sqrt{{\cal J}({\cal J}+1)}}{2{\cal J}+1},
$$

and zero otherwise.
In this exercise we will focus only on the one-pion exchange term of the nuclear force, namely
Here the constant $f_{\pi}^{2}/4\pi\approx 0.08$ and the mass of the pion is $m_\pi\approx 140$ MeV/c${}^{2}$.
a) Compute the expectation value of the tensor force and the spin-spin and isospin operators for the one-pion exchange potential for all partial waves you found in exercise 9. Comment your results. How does the one-pion exchange part behave as function of different $l$, ${\cal J}$ and $S$ values? Do you see some patterns?
b) For the binding energy of the deuteron, with the ground state defined by the quantum numbers $l=0$, $S=1$ and ${\cal J}=1$, the tensor force plays an important role due to the admixture from the $l=2$ state. Use the expectation values of the different operators of the one-pion exchange potential and plot the ratio of the tensor force component over the spin-spin component of the one-pion exchange part as function of $x=m_\pi r$ for the $l=2$ state (that is the case $l,l'={\cal J}+1$). Comment your results.
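For part b), a possible starting point is sketched below. It assumes the standard radial structure of the one-pion exchange potential, with the spin-spin (central) piece proportional to $e^{-x}/x$ and the tensor piece to $\left(1+3/x+3/x^2\right)e^{-x}/x$, where $x=m_\pi r$. Since the explicit expression is not reproduced here, these radial factors are an assumption to be checked against the form given in the lectures.

In [1]:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.5, 10.0, 400)                              # x = m_pi * r
central = np.exp(-x) / x                                     # assumed spin-spin radial factor
tensor = (1.0 + 3.0 / x + 3.0 / x ** 2) * np.exp(-x) / x     # assumed tensor radial factor
plt.plot(x, tensor / central)
plt.xlabel(r"$x = m_\pi r$")
plt.ylabel("tensor/spin-spin ratio")
plt.show()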
The simplest possible choice for many-body wavefunctions are product wavefunctions. That is
because we are really only good at thinking about one particle at a time. Such product wavefunctions, without correlations, are easy to work with; for example, if the single-particle states $\phi_i(x)$ are orthonormal, then the product wavefunctions are easy to orthonormalize.
Similarly, computing matrix elements of operators are relatively easy, because the integrals factorize.
The price we pay is the lack of correlations, which we must build up by using many, many product wavefunctions. (Thus we have a trade-off: compact representation of correlations but difficult integrals versus easy integrals but many states required.)
Because we have fermions, we are required to have antisymmetric wavefunctions, e.g.
etc. This is accomplished formally by using the determinantal formalism
Product wavefunction + antisymmetry = Slater determinant.
Properties of the determinant (interchange of any two rows or any two columns yields a change in sign; thus no two rows and no two columns can be the same) lead to the Pauli principle:
No two particles can be at the same place (two columns the same); and
No two particles can be in the same state (two rows the same).
As a practical matter, however, Slater determinants beyond $N=4$ quickly become unwieldy. Thus we turn to the occupation representation or second quantization to simplify calculations.
The occupation representation, using fermion creation and annihilation operators, is compact and efficient. It is also abstract and, at first encounter, not easy to internalize. It is inspired by other operator formalisms, such as the ladder operators for the harmonic oscillator or for angular momentum, but unlike those cases, the operators do not have coordinate space representations.
Instead, one can think of fermion creation/annihilation operators as a game of symbols that compactly reproduces what one would do, albeit clumsily, with full coordinate-space Slater determinants.
We start with a set of orthonormal single-particle states $\{ \phi_i(x) \}$. (Note: this requirement, and others, can be relaxed, but leads to a more involved formalism.) Any orthonormal set will do.
To each single-particle state $\phi_i(x)$ we associate a creation operator $\hat{a}^\dagger_i$ and an annihilation operator $\hat{a}_i$.
When acting on the vacuum state $| 0 \rangle$, the creation operator $\hat{a}^\dagger_i$ causes a particle to occupy the single-particle state $\phi_i(x)$:
But with multiple creation operators we can occupy multiple states:
Now we impose antisymmetry, by having the fermion operators satisfy anticommutation relations:
so that
Because of this property, automatically $\hat{a}^\dagger_i \hat{a}^\dagger_i = 0$, enforcing the Pauli exclusion principle. Thus when writing a Slater determinant using creation operators,
where the index $i$ defines different single-particle states up to the Fermi level. We have assumed that we have $N$ fermions. A given one-particle-one-hole ($1p1h$) state can be written as
while a $2p2h$ state can be written as
and a general $NpNh$ state as
We can then expand our exact state function for the ground state as
where we have introduced the so-called correlation operator
Since the normalization of $\Psi_0$ is at our disposal and since $C_0$ is by hypothesis non-zero, we may arbitrarily set $C_0=1$ with corresponding proportional changes in all other coefficients. Using this so-called intermediate normalization we have
resulting in
We rewrite
in a more compact form as
where $H$ stands for $0,1,\dots,n$ hole states and $P$ for $0,1,\dots,n$ particle states. Our requirement of unit normalization gives
and the energy can be written as
is solved by diagonalization setting up the Hamiltonian matrix defined by the basis of all possible Slater determinants. A diagonalization is equivalent to finding the variational minimum of
where $\lambda$ is a variational multiplier to be identified with the energy of the system. The minimization process results in
$$ \sum_{P'H'}\delta\left[C_{H}^{*P}\right]\left\{\langle \Phi_{H}^{P}\vert \hat{H}\vert \Phi_{H'}^{P'}\rangle C_{H'}^{P'}-\lambda C_{H}^{P}\right\}+\mathrm{c.c.}=0. $$
Since the coefficients $\delta[C_H^{*P}]$ and $\delta[C_{H'}^{P'}]$ are complex conjugates it is necessary and sufficient to require the quantities that multiply with $\delta[C_H^{*P}]$ to vanish.
This leads to
for all sets of $P$ and $H$.
If we then multiply by the corresponding $C_H^{*P}$ and sum over $PH$ we obtain
leading to the identification $\lambda = E$. This means that we have for all $PH$ sets
An alternative way to derive the last equation is to start from
and if this equation is successively projected against all $\Phi_H^P$ in the expansion of $\Psi$, then the last equation on the previous slide results. As stated previously, one solves this equation normally by diagonalization. If we are able to solve this equation exactly (that is numerically exactly) in a large Hilbert space (it will be truncated in terms of the number of single-particle states included in the definition of Slater determinants), it can then serve as a benchmark for other many-body methods which approximate the correlation operator $\hat{C}$.
For reasons to come (links with Coupled-Cluster theory and Many-Body perturbation theory), we will rewrite Eq. (\ref{eq:fullci}) as a set of coupled non-linear equations in terms of the unknown coefficients $C_H^P$.
To see this, we look at the contributions arising from
in Eq. (130), that is we multiply with $\langle \Phi_0 |$ from the left in
If we assume that we have a two-body operator at most, Slater's rule gives then an equation for the correlation energy in terms of $C_i^a$ and $C_{ij}^{ab}$ only. We get then
or
where the energy $E_0$ is the reference energy and $\Delta E$ defines the so-called correlation energy. The single-particle basis functions could be the results of a Hartree-Fock calculation or just the eigenstates of the non-interacting part of the Hamiltonian.
In our chapter on Hartree-Fock calculations, we have already computed the matrix elements $\langle \Phi_0 | \hat{H}|\Phi_{i}^{a}\rangle $ and $\langle \Phi_0 | \hat{H}|\Phi_{ij}^{ab}\rangle$. If we are using a Hartree-Fock basis, the matrix elements $\langle \Phi_0 | \hat{H}|\Phi_{i}^{a}\rangle $ vanish (Brillouin's theorem) and we are left with a correlation energy given by
Inserting the various matrix elements we can rewrite the previous equation as
This equation determines the correlation energy but not the coefficients $C$. We need more equations. Our next step is to set up
as this equation will allow us to find an expression for the coefficients $C_i^a$, since we can rewrite this equation as
We rewrite this equation as
Since these equations are solved iteratively (that is, we can start with a guess for the coefficients $C_i^a$), it is common to start the iteration by setting
and the denominator can be written as
The observant reader will however see that we need an equation for $C_{jk}^{bc}$ and $C_{jkl}^{bcd}$ as well. To find equations for these coefficients we need then to continue our multiplications from the left with the various $\Phi_{H}^P$ terms.
For $C_{jk}^{bc}$ we need then
and we can isolate the coefficients $C_{kl}^{cd}$ in a similar way as we did for the coefficients $C_{i}^{a}$. At the end we can rewrite our solution of the Schr\"odinger equation in terms of $n$ coupled equations for the coefficients $C_H^P$. This is a very cumbersome way of solving the equation. However, by using this iterative scheme we can illustrate how we can compute the various terms in the wave operator or correlation operator $\hat{C}$. We will later identify the calculation of the various terms $C_H^P$ as parts of different many-body approximations to full CI. In particular, we can relate this non-linear scheme with Coupled Cluster theory and many-body perturbation theory. These theories will not be discussed in this course.
If we use a Hartree-Fock basis, we simplify this equation
What about the corresponding equations for the remaining coefficients, for example those obtained by projecting onto $\Phi_{ijk}^{abc}$ and $\Phi_{ijkl}^{abcd}$?
In project 1 the plan is to construct a working code that constructs the many-body Hamiltonian matrix in a basis of Slater determinants and to find the low-lying eigenenergies. This is referred to as the configuration-interaction method or shell-model diagonalization (or the interacting shell model).
The first step in such codes--and in your project--is to construct the many-body basis.
While the formalism is independent of the choice of basis, the effectiveness of a calculation will certainly be basis dependent.
Furthermore there are common conventions useful to know.
First, the single-particle basis has angular momentum as a good quantum number. You can imagine the single-particle wavefunctions being generated by a one-body Hamiltonian, for example a harmonic oscillator. Modifications include harmonic oscillator plus spin-orbit splitting, or self-consistent mean-field potentials, or the Woods-Saxon potential which mocks up the self-consistent mean-field.
For nuclei, the harmonic oscillator, modified by spin-orbit splitting, provides a useful language for describing single-particle states.
Each single-particle state is labeled by the following quantum numbers:
Orbital angular momentum $l$
Intrinsic spin $s$ = 1/2 for protons and neutrons
Angular momentum $j = l \pm 1/2$
$z$-component $j_z$ (or $m$)
Some labeling of the radial wavefunction, typically $n$ the number of nodes in the radial wavefunction, but in the case of harmonic oscillator one can also use the principal quantum number $N$, where the harmonic oscillator energy is $(N+3/2)\hbar \omega$.
In this format one labels states by $n(l)_j$, with $(l)$ replaced by a letter: $s$ for $l=0$, $p$ for $l=1$, $d$ for $l=2$, $f$ for $l=3$, and thenceforth alphabetical.
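As a small illustration (not part of the original text), a helper that builds this label from the quantum numbers, storing $2j$ as an integer as is done later for the input files:

def orbital_label(n, l, twoj):
    """Spectroscopic label n(l)_j, e.g. (0, 2, 5) -> '0d5/2'."""
    letters = 'spdfgh'          # l = 0, 1, 2, ...; extend alphabetically for larger l
    return "{}{}{}/2".format(n, letters[l], twoj)

print(orbital_label(0, 2, 5))   # 0d5/2
print(orbital_label(1, 0, 1))   # 1s1/2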
In practice the single-particle space has to be severely truncated. This truncation is typically based upon the single-particle energies, which is the effective energy from a mean-field potential.
Sometimes we freeze the core and only consider a valence space. For example, one may assume a frozen ${}^{4}\mbox{He}$ core, with two protons and two neutrons in the $0s_{1/2}$ shell, and then only allow active particles in the $0p_{1/2}$ and $0p_{3/2}$ orbits.
Another example is a frozen ${}^{16}\mbox{O}$ core, with eight protons and eight neutrons filling the $0s_{1/2}$, $0p_{1/2}$ and $0p_{3/2}$ orbits, with valence particles in the $0d_{5/2}, 1s_{1/2}$ and $0d_{3/2}$ orbits.
Sometimes we refer to nuclei by the valence space where their last nucleons go.
So, for example, we call ${}^{12}\mbox{C}$ a $p$-shell nucleus, while ${}^{26}\mbox{Al}$ is an
$sd$-shell nucleus and ${}^{56}\mbox{Fe}$ is a $pf$-shell nucleus.
There are different kinds of truncations.
For example, one can start with `filled' orbits (almost always the lowest), and then allow one, two, three... particles excited out of those filled orbits. These are called 1p-1h, 2p-2h, 3p-3h excitations.
Alternately, one can state a maximal orbit and allow all possible configurations with particles occupying states up to that maximum. This is called full configuration.
Finally, for particular use in nuclear physics, there is the energy truncation, also called the $N\hbar\Omega$ or $N_{max}$ truncation.
Here one works in a harmonic oscillator basis, with each major oscillator shell assigned a principal quantum number $N=0,1,2,3,...$.
The $N\hbar\Omega$ or $N_{max}$ truncation: Any configuration is given a noninteracting energy, which is the sum of the single-particle harmonic oscillator energies. (Thus this ignores spin-orbit splitting.)
Excited states are labeled relative to the lowest configuration by the number of harmonic oscillator quanta.
This truncation is useful because: if one includes all configurations up to some $N_{max}$, and has a translationally invariant interaction, then the intrinsic motion and the center-of-mass motion factorize. In other words, we know the center-of-mass wavefunction exactly.
In almost all cases, the many-body Hamiltonian is rotationally invariant. This means it commutes with the operators $\hat{J}^2, \hat{J}_z$ and so eigenstates will have good $J,M$. Furthermore, the eigenenergies do not depend upon the orientation $M$.
Therefore we can choose to construct a many-body basis which has fixed $M$; this is called an $M$-scheme basis.
Alternately, one can construct a many-body basis which has fixed $J$, or a $J$-scheme basis.
The Hamiltonian matrix will have smaller dimensions (a factor of 10 or more) in the $J$-scheme than in the $M$-scheme. On the other hand, as we'll show in the next slide, the $M$-scheme is very easy to construct with Slater determinants, while the $J$-scheme basis states, and thus the matrix elements, are more complicated, almost always being linear combinations of $M$-scheme states. $J$-scheme bases are important and useful, but we'll focus on the simpler $M$-scheme.
The quantum number $m$ is additive (because the underlying group is Abelian): if a Slater determinant $\hat{a}_i^\dagger \hat{a}^\dagger_j \hat{a}^\dagger_k \ldots | 0 \rangle$ is built from single-particle states all with good $m$, then the total $M$ is simply the sum of the individual $m$ values.
This is not true of $J$, because the angular momentum group SU(2) is not Abelian.
The upshot is that
It is easy to construct a Slater determinant with good total $M$;
It is trivial to calculate $M$ for each Slater determinant;
So it is easy to construct an $M$-scheme basis with fixed total $M$.
Note that the individual $M$-scheme basis states will not, in general, have good total $J$. Because the Hamiltonian is rotationally invariant, however, the eigenstates will have good $J$. (The situation is muddied when one has states of different $J$ that are nonetheless degenerate.)
Example: two $j=1/2$ orbits
Index | $n$ | $l$ | $j$ | $m_j$ |
---|---|---|---|---|
1 | 0 | 0 | 1/2 | -1/2 |
2 | 0 | 0 | 1/2 | 1/2 |
3 | 1 | 0 | 1/2 | -1/2 |
4 | 1 | 0 | 1/2 | 1/2 |
Occupied | $M$ |
---|---|
1,2 | 0 |
1,3 | -1 |
1,4 | 0 |
2,3 | 0 |
2,4 | 1 |
3,4 | 0 |
Index | $n$ | $l$ | $j$ | $m_j$ |
---|---|---|---|---|
1 | 0 | 2 | 5/2 | -5/2 |
2 | 0 | 2 | 5/2 | -3/2 |
3 | 0 | 2 | 5/2 | -1/2 |
4 | 0 | 2 | 5/2 | 1/2 |
5 | 0 | 2 | 5/2 | 3/2 |
6 | 0 | 2 | 5/2 | 5/2 |
Occupied | $M$ | Occupied | $M$ | Occupied | $M$ |
---|---|---|---|---|---|
1,2 | -4 | 2,3 | -2 | 3,5 | 1 |
1,3 | -3 | 2,4 | -1 | 3,6 | 2 |
1,4 | -2 | 2,5 | 0 | 4,5 | 2 |
1,5 | -1 | 2,6 | 1 | 4,6 | 3 |
1,6 | 0 | 3,4 | 0 | 5,6 | 4 |
The basic goal of the project is for you to build your own configuration-interaction shell-model code. The code will be fairly basic; it will assume that we have a single species of particles, e.g. only neutrons, and you could, if you wish to, read in uncoupled two-body matrix elements. Furthermore the pieces of the code will not be the most efficient. Nonetheless it will be usable; most importantly, you will gain a good idea of what goes into a many-body shell-model code.
The first step is to construct the $M$-scheme basis of Slater determinants. Here $M$-scheme means the total $J_z$ of the many-body states is fixed.
The steps could be:
Read in a user-supplied file of single-particle states (examples can be given) or just code these internally;
Ask for the total $M$ of the system and the number of particles $N$;
Construct all the $N$-particle states with given $M$. You will validate the code by comparing both the number of states and specific states.
The format of a possible input file could be
Index | $n$ | $l$ | $2j$ | $2m_j$ |
---|---|---|---|---|
1 | 1 | 0 | 1 | -1 |
2 | 1 | 0 | 1 | 1 |
3 | 0 | 2 | 3 | -3 |
4 | 0 | 2 | 3 | -1 |
5 | 0 | 2 | 3 | 1 |
6 | 0 | 2 | 3 | 3 |
7 | 0 | 2 | 5 | -5 |
8 | 0 | 2 | 5 | -3 |
9 | 0 | 2 | 5 | -1 |
10 | 0 | 2 | 5 | 1 |
11 | 0 | 2 | 5 | 3 |
12 | 0 | 2 | 5 | 5 |

This represents the $1s_{1/2}0d_{3/2}0d_{5/2}$ valence space, or just the $sd$-space. There are twelve single-particle states, labeled by an overall index, with associated quantum numbers the number of radial nodes $n$, the orbital angular momentum $l$, the angular momentum $j$ and its third component $j_z$. To keep everything as integers, we could store $2 \times j$ and $2 \times j_z$.
To read in the single-particle states you need to:
Open the file and parse each line into the single-particle quantum numbers (a minimal reading sketch follows this list).
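A minimal sketch of such a reader, assuming a plain whitespace-separated file with one state per line in the format above and no header (the filename spstates.dat is hypothetical):

def read_sp_states(filename="spstates.dat"):
    """Return a list of (index, n, l, 2j, 2mj) tuples read from the single-particle file."""
    states = []
    with open(filename) as infile:
        for line in infile:
            fields = line.split()
            if len(fields) == 5 and fields[0].isdigit():   # skip blank or header lines
                states.append(tuple(int(f) for f in fields))
    return states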
The next step is to read in the number of particles $N$ and the fixed total $M$ (or, actually, $2 \times M$). For this project we assume only a single species of particles, say neutrons, although this can be relaxed. Note: although it is often a good idea to try to write a more general code, given the short time allotted we would suggest you keep your ambition in check, at least in the initial phases of the project.
You should probably write an error trap to make sure $N$ and $M$ are congruent; if $N$ is even, then $2 \times M$ should be even, and if $N$ is odd then $2\times M$ should be odd.
The final step is to generate the set of $N$-particle Slater determinants with fixed $M$. The Slater determinants will be stored in the occupation representation. In many codes this representation is done compactly in bit notation with ones and zeros, but for greater transparency and simplicity we will list the occupied single-particle states.
Hence we can store the Slater determinant basis states as $sd(i,j)$, that is, an array of dimension $N_{SD}$, the number of Slater determinants, by $N$, the number of occupied states. So if for the 7th Slater determinant the 2nd, 3rd, and 9th single-particle states are occupied, then $sd(7,1) = 2$, $sd(7,2) = 3$, and $sd(7,3) = 9$.
We can construct an occupation representation of Slater determinants by the odometer method. Consider $N_{sp} = 12$ and $N=4$. Start with the first 4 states occupied, that is $sd(1,:)= 1,2,3,4$.
Now increase the last occupancy recursively:
$sd(2,:)= 1,2,3,5$
$sd(3,:)= 1,2,3,6$
$sd(4,:)= 1,2,3,7$
$\ldots$
$sd(9,:)= 1,2,3,12$
Then start over with $sd(10,:)= 1,2,4,5$
and again increase the rightmost digit
$sd(11,:)= 1,2,4,6$
$sd(12,:)= 1,2,4,7$
$\ldots$
$sd(17,:)= 1,2,4,12$
When we restrict ourselves to an $M$-scheme basis, we could choose two paths. The first is simplest (and simplest is often best, at least in the first draft of a code): generate all possible Slater determinants, and then extract from this initial list a list of those Slater determinants with a given $M$. (You will need to write a short function or routine that computes $M$ for any given occupation.)
Alternately, and not too difficult, is to run the odometer routine twice: each time, as a Slater determinant is calculated, compute $M$, but do not store the Slater determinants except the current one. You can then count up the number of Slater determinants with a chosen $M$. Then allocate storage for the Slater determinants, and run the odometer algorithm again, this time storing Slater determinants with the desired $M$ (this can be done with a simple logical flag).
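A minimal sketch of the first, generate-then-filter strategy for the $sd$-space table above; here itertools.combinations plays the role of the odometer, producing the occupied indices in increasing order:

from itertools import combinations

# 2*m_j for each single-particle index of the sd-space table above
two_mj = {1: -1, 2: 1, 3: -3, 4: -1, 5: 1, 6: 3,
          7: -5, 8: -3, 9: -1, 10: 1, 11: 3, 12: 5}

def total_2M(occupied):
    """Total 2*M of a Slater determinant given its occupied single-particle indices."""
    return sum(two_mj[i] for i in occupied)

def build_basis(n_particles, target_2M):
    """All N-particle Slater determinants (occupation representation) with fixed 2*M."""
    return [occ for occ in combinations(sorted(two_mj), n_particles)
            if total_2M(occ) == target_2M]

print(len(build_basis(2, 0)))   # 14 states for N=2 and M=0 in the sd-space, as quoted below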
Some example solutions: Let's begin with a simple case, the $0d_{5/2}$ space containing six single-particle states
Index | $n$ | $l$ | $j$ | $m_j$ |
---|---|---|---|---|
1 | 0 | 2 | 5/2 | -5/2 |
2 | 0 | 2 | 5/2 | -3/2 |
3 | 0 | 2 | 5/2 | -1/2 |
4 | 0 | 2 | 5/2 | 1/2 |
5 | 0 | 2 | 5/2 | 3/2 |
6 | 0 | 2 | 5/2 | 5/2 |

For two particles, there are a total of 15 states, which we list here with the total $M$:
$| 1,2 \rangle$, $M= -4$, $| 1,3 \rangle$, $M= -3$
$| 1,4 \rangle$, $M= -2$, $| 1,5 \rangle$, $M= -1$
$| 1,6 \rangle$, $M= 0$, $| 2,3 \rangle$, $M= -2$
$| 2,4 \rangle$, $M= -1$, $| 2,5 \rangle$, $M= 0$
$| 2,6 \rangle$, $M= 1$, $| 3,4 \rangle$, $M= 0$
$| 3,5 \rangle$, $M= 1$, $| 3,6 \rangle$, $M= 2$
$| 4,5 \rangle$, $M= 2$, $ | 4,6 \rangle$, $M= 3$
$| 5,6 \rangle$, $M= 4$
Of these, there are only 3 states with $M=0$.
You should try by hand to show that in this same single-particle space, for $N=3$ there are 3 states with $M=1/2$ and for $N= 4$ there are also only 3 states with $M=0$.
To test your code, confirm the above.
Also, for the $sd$-space given above, for $N=2$ there are 14 states with $M=0$, for $N=3$ there are 37 states with $M=1/2$, for $N=4$ there are 81 states with $M=0$.
For our project, we will only consider the pairing model. A simple space is the $(1/2)^2$ space with four single-particle states
Index | $n$ | $l$ | $s$ | $m_s$ |
---|---|---|---|---|
1 | 0 | 0 | 1/2 | -1/2 |
2 | 0 | 0 | 1/2 | 1/2 |
3 | 1 | 0 | 1/2 | -1/2 |
4 | 1 | 0 | 1/2 | 1/2 |

For $N=2$ there are 4 states with $M=0$; show this by hand and confirm your code reproduces it.
Another, slightly more challenging space is the $(1/2)^4$ space, that is, with eight single-particle states we have
Index | $n$ | $l$ | $s$ | $m_s$ |
---|---|---|---|---|
1 | 0 | 0 | 1/2 | -1/2 |
2 | 0 | 0 | 1/2 | 1/2 |
3 | 1 | 0 | 1/2 | -1/2 |
4 | 1 | 0 | 1/2 | 1/2 |
5 | 2 | 0 | 1/2 | -1/2 |
6 | 2 | 0 | 1/2 | 1/2 |
7 | 3 | 0 | 1/2 | -1/2 |
8 | 3 | 0 | 1/2 | 1/2 |

For $N=2$ there are 16 states with $M=0$; for $N=3$ there are 24 states with $M=1/2$, and for $N=4$ there are 36 states with $M=0$.
In the shell-model context we can interpret this as four $s_{1/2}$ levels, each with $m = \pm 1/2$; we can also think of these as simply four pairs, $\pm k$, $k = 1,2,3,4$. Later on we will assign single-particle energies, depending on the radial quantum number $n$, that is, $\epsilon_k = |k| \delta$, so that they are equally spaced.
For application in the pairing model we can go further and consider only states with no "broken pairs", that is, if $+k$ ($m=+1/2$) is filled, so is $-k$ ($m=-1/2$). If you want, you can write your code to accept only these (a small filter sketch is given after the list below), and obtain the following six states:
$| 1, 2 , 3 , 4 \rangle , $
$| 1 , 2 , 5 , 6 \rangle , $
$| 1 , 2 , 7 , 8 \rangle , $
$| 3 , 4 , 5 , 6 \rangle , $
$| 3 , 4 , 7 , 8 \rangle , $
$| 5 , 6 , 7 , 8 \rangle $
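A possible filter, a sketch only, assuming the convention used above that single-particle states $2p-1$ and $2p$ form pair $p$:

from itertools import combinations

def no_broken_pairs(occupied):
    """True if every occupied state is accompanied by its pair partner (states 2p-1 and 2p)."""
    occ = set(occupied)
    return all((i + 1 if i % 2 == 1 else i - 1) in occ for i in occ)

states = [occ for occ in combinations(range(1, 9), 4) if no_broken_pairs(occ)]
print(states)   # reproduces the six states listed above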
Write small modules (routines/functions) ; avoid big functions that do everything. (But not too small.)
Write lots of error traps, even for things that are `obvious.'
Document as you go along. For each function write a header that includes:
Main purpose of function
names and brief explanation of input variables, if any
names and brief explanation of output variables, if any
functions called by this function
called by which functions
Hints for coding
When debugging, print out intermediate values. It's almost impossible to debug a code by looking at it--the code will almost always win a `staring contest.'
Validate code with SIMPLE CASES. Validate early and often.
The number one mistake is using too complex a system to test. For example, if you are computing particles in a potential in a box, try removing the potential--you should get particles in a box. And start with one particle, then two, then three... Don't start with eight particles.
Our recommended occupation representation, e.g. $| 1,2,4,8 \rangle$, is easy to code, but numerically inefficient when one has hundreds of millions of Slater determinants.
In state-of-the-art shell-model codes, one generally uses bit representation, i.e. $|1101000100... \rangle$ where one stores the Slater determinant as a single (or a small number of) integer.
This is much more compact, but more intricate to code, with considerably more overhead. There exist bit-manipulation functions. This is left as a challenge for those of you who would like to study this topic further for the final project to be presented for the oral examination.
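For illustration, a small sketch of the idea (ignoring the fermionic phases, which a real code must track by counting the occupied bits below the affected position):

occupied = (1, 2, 4, 8)                 # occupation representation
bits = 0
for p in occupied:
    bits |= 1 << (p - 1)                # set bit p-1 to mark single-particle state p as occupied

print(bin(bits))                        # 0b10001011
print(bin(bits).count('1'))             # number of particles: 4
print(bool(bits & (1 << 3)))            # is state 4 occupied? True
bits &= ~(1 << 3)                       # remove the particle in state 4
bits |= 1 << 5                          # put a particle in state 6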
We consider a space with $2\Omega$ single-particle states, with each state labeled by $k = 1, 2, 3, \ldots, \Omega$ and $m = \pm 1/2$. The convention is that the state with $k>0$ has $m = + 1/2$ while $-k$ has $m = -1/2$.
The Hamiltonian we consider is
where
and $\hat{P}_- = ( \hat{P}_+)^\dagger$.
This problem can be solved using what is called the quasi-spin formalism to obtain the exact results. Thereafter we will try again using the explicit Slater determinant formalism.
One can show (and this is part of the project) that
Now define
Finally you can show
This means the operators $\hat{P}_\pm, \hat{P}_z$ form a so-called $SU(2)$ algebra, and we can use all our insights about angular momentum, even though there is no actual angular momentum involved (this is similar to project 1).
So we rewrite the Hamiltonian to make this explicit:
Because of the SU(2) algebra, we know that the eigenvalues of $\hat{P}^2$ must be of the form $p(p+1)$, with $p$ either integer or half-integer, and the eigenvalues of $\hat{P}_z$ are $m_p$ with $p \geq | m_p|$, with $m_p$ also integer or half-integer.
But because $\hat{P}_z = (1/2)(\hat{N}-\Omega)$, we know that for $N$ particles the value $m_p = (N-\Omega)/2$. Furthermore, the values of $m_p$ range from $-\Omega/2$ (for $N=0$) to $+\Omega/2$ (for $N=2\Omega$, with all states filled).
We deduce that the maximal $p = \Omega/2$, and that for a given $N$ the values of $p$ range from $|N-\Omega|/2$ to $\Omega/2$ in steps of 1 (for an even number of particles).
Following Racah we introduce the notation $p = (\Omega - v)/2$, where $v = 0, 2, 4,..., \Omega - |N-\Omega|$. With this it is easy to deduce that the eigenvalues of the pairing Hamiltonian are
This also works for $N$ odd, with $v= 1,3,5, \dots$.
Let's take a specific example: $\Omega = 3$ so there are 6 single-particle states, and $N = 3$, with $v= 1,3$. Therefore there are two distinct eigenvalues,
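As a quick check (assuming the pure pairing Hamiltonian $\hat{H}=-G\hat{P}_+\hat{P}_-$ referred to above, rewritten in quasi-spin form as $-G(\hat{P}^2-\hat{P}_z^2+\hat{P}_z)$), with $m_p=(N-\Omega)/2=0$ one finds
$$ E(v=1)=-G\,p(p+1)\big\vert_{p=1}=-2G,\qquad E(v=3)=-G\,p(p+1)\big\vert_{p=0}=0, $$
consistent with the explicit $9\times 9$ diagonalization worked out below (three eigenvalues $-2G$ and six equal to zero).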
Now let's work this out explicitly. The single particle degrees of freedom are defined as
Index | $k$ | $m$ |
---|---|---|
1 | 1 | -1/2 |
2 | -1 | 1/2 |
3 | 2 | -1/2 |
4 | -2 | 1/2 |
5 | 3 | -1/2 |
6 | -3 | 1/2 |

There are $\left( \begin{array}{c}6 \\ 3 \end{array} \right) = 20$ three-particle states, but there are 9 states with $M = +1/2$, namely $| 1,2,3 \rangle, |1,2,5\rangle, | 1,4,6 \rangle, | 2,3,4 \rangle, |2,3,6 \rangle, | 2,4,5 \rangle, | 2, 5, 6 \rangle, |3,4,6 \rangle, | 4,5,6 \rangle$.
In this basis, the operator
From this we can determine that
so those states all have eigenvalue 0.
Now, as a further example,
so
The second term vanishes because state 3 is occupied twice, and reordering the last term we get
without picking up a phase.
Continuing in this fashion, with the previous ordering of the many-body states ( $| 1,2,3 \rangle, |1,2,5\rangle, | 1,4,6 \rangle, | 2,3,4 \rangle, |2,3,6 \rangle, | 2,4,5 \rangle, | 2, 5, 6 \rangle, |3,4,6 \rangle, | 4,5,6 \rangle$) the Hamiltonian matrix of this system is
This is useful for our project. One can by hand confirm that there are 3 eigenvalues $-2G$ and 6 with value zero.
Another example: using the $(1/2)^4$ single-particle space, resulting in eight single-particle states,

Index | $n$ | $l$ | $s$ | $m_s$ |
---|---|---|---|---|
1 | 0 | 0 | 1/2 | -1/2 |
2 | 0 | 0 | 1/2 | 1/2 |
3 | 1 | 0 | 1/2 | -1/2 |
4 | 1 | 0 | 1/2 | 1/2 |
5 | 2 | 0 | 1/2 | -1/2 |
6 | 2 | 0 | 1/2 | 1/2 |
7 | 3 | 0 | 1/2 | -1/2 |
8 | 3 | 0 | 1/2 | 1/2 |

and then taking only 4-particle, $M=0$ states that have no `broken pairs', there are six basis Slater determinants:
$| 1, 2 , 3 , 4 \rangle , $
$| 1 , 2 , 5 , 6 \rangle , $
$| 1 , 2 , 7 , 8 \rangle , $
$| 3 , 4 , 5 , 6 \rangle , $
$| 3 , 4 , 7 , 8 \rangle , $
$| 5 , 6 , 7 , 8 \rangle $
Now we take the following Hamiltonian
where
and
We can write down the $ 6 \times 6$ Hamiltonian in the basis from the prior slide:
(You should check by hand that this is correct.)
For $\delta = 0$ we have the closed form solution of the g.s. energy given by $-6G$.
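As a numerical cross-check of the $\delta=0$ limit, a sketch that only assumes the pair-scattering structure of $-G\hat{P}_+\hat{P}_-$ in this no-broken-pair basis (the full matrix with the $\delta$-dependent diagonal is the one given above):

import numpy as np
from itertools import combinations

G = 1.0
n_levels, n_pairs = 4, 2
basis = list(combinations(range(1, n_levels + 1), n_pairs))   # two pairs distributed over four levels

def pairing_me(s, t):
    """<s| -G P+P- |t> for pair-occupation states s and t (no broken pairs)."""
    common = len(set(s) & set(t))
    if common == n_pairs:          # diagonal: each occupied pair can scatter onto itself
        return -G * n_pairs
    if common == n_pairs - 1:      # states differing by one pair: one allowed scattering
        return -G
    return 0.0

H = np.array([[pairing_me(s, t) for t in basis] for s in basis])
print(np.linalg.eigvalsh(H))       # the lowest eigenvalue is -6G, as quoted above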
We need to define the so-called $6j$ and $9j$ symbols
The Wigner-Eckart theorem
We will also study some specific examples, like the calculation of the tensor force.
We define an irreducible spherical tensor $T^{\lambda}_{\mu}$ of rank $\lambda$ as an operator with $2\lambda+1$ components $\mu$ that satisfies the commutation relations ($\hbar=1$)
and
Our angular momentum coupled two-body wave function clearly obeys this definition, namely
is a tensor of rank $J$ with $2J+1$ components $M$. Another well-known example is given by the spherical harmonics (see examples during today's lecture).
The product of two irreducible tensor operators
is also a tensor operator of rank $\lambda_3$.
We wish to apply the above definitions to the computations of a matrix element
where we have skipped a reference to specific single-particle states. This is the expectation value for two specific states, labelled by angular momenta $J'$ and $J$. These states form an orthonormal basis. Using the properties of the Clebsch-Gordan coefficients we can write
and assuming that states with different $J$ and $M$ are orthonormal we arrive at
We need to show that
is independent of $M$. To show that
is independent of $M$, we use the ladder operators for angular momentum.
We have that
but this is also equal to
meaning that
The double bars indicate that this expectation value is independent of the projection $M$.
The Wigner-Eckart theorem for an expectation value can then be written as
We can manipulate the Clebsch-Gordan coefficients using the relations
and
together with the so-called $3j$ symbols. It is then normal to encounter the Wigner-Eckart theorem in the form
with the condition $\mu+M'-M=0$.
The $3j$ symbols obey the symmetry relation
with $(-1)^p=1$ when the columns $a,b, c$ are even permutations of the columns $1,2,3$, $p=j_1+j_2+j_3$ when the columns $a,b,c$ are odd permutations of the columns $1,2,3$, and $p=j_1+j_2+j_3$ when all the magnetic quantum numbers $m_i$ change sign. Their orthogonality is given by
and
For later use, the following special cases for the Clebsch-Gordan and $3j$ symbols are rather useful:
$$ \langle JM J'M' |00\rangle =\frac{(-1)^{J-M}}{\sqrt{2J+1}}\delta_{JJ'}\delta_{M,-M'} $$
and
$$ \left(\begin{array}{ccc} J & 1 & J \\ -M & 0 & M'\end{array}\right)=(-1)^{J-M}\frac{M}{\sqrt{J(J+1)(2J+1)}}\delta_{MM'}. $$
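These special cases are easy to verify numerically; a small check using sympy (sympy is not used elsewhere in these notes, so this is purely illustrative):

from sympy import sqrt, Rational, simplify
from sympy.physics.wigner import wigner_3j, clebsch_gordan

for J in (1, 2, Rational(3, 2)):
    for M in (J, J - 1):
        # <JM J -M|00> = (-1)^(J-M)/sqrt(2J+1)
        cg = clebsch_gordan(J, J, 0, M, -M, 0)
        assert simplify(cg - (-1)**(J - M)/sqrt(2*J + 1)) == 0
        # (J 1 J; -M 0 M) = (-1)^(J-M) M/sqrt(J(J+1)(2J+1))
        tj = wigner_3j(J, 1, J, -M, 0, M)
        assert simplify(tj - (-1)**(J - M)*M/sqrt(J*(J + 1)*(2*J + 1))) == 0
print("special cases verified")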
Using $3j$ symbols we rewrote the Wigner-Eckart theorem as
Multiplying from the left with the same $3j$ symbol and summing over $M,\mu,M'$ we obtain the equivalent relation
where we used the orthogonality properties of the $3j$ symbols from the previous page.
This relation can in turn be used to compute the expectation value of some simple reduced matrix elements like
where we used
Similarly, using
we have that
With the Pauli spin matrices $\sigma$ and a state with $J=1/2$, the reduced matrix element
Before we proceed with further examples, we need some other properties of the Wigner-Eckart theorem plus some additional angular momenta relations.
The Wigner-Eckart theorem states that the expectation value for an irreducible spherical tensor can be written as
Since the Clebsch-Gordan coefficients themselves are easy to evaluate, the interesting quantity is the reduced matrix element. Note also that the Clebsch-Gordan coefficients limit via the triangular relation among $\lambda$, $J$ and $J'$ the possible non-zero values.
From the theorem we see also that
meaning that if we know the matrix elements for say some $\mu=\mu_0$, $M'=M'_0$ and $M=M_0$ we can calculate all the others.
If we look at the hermitian adjoint of the operator $T^{\lambda}_{\mu}$, we see via the commutation relations that $(T^{\lambda}_{\mu})^{\dagger}$ is not an irreducible tensor, that is
and
The hermitian adjoint $(T^{\lambda}_{\mu})^{\dagger}$ is not an irreducible tensor. As an example, consider the spherical harmonics for $l=1$ and $m_l=\pm 1$. These functions are
and
It is easy to see that the Hermitian adjoint of these two functions
and
do not behave as a spherical tensor. However, the modified quantity
does satisfy the above commutation relations.
With the modified quantity
we can then define the expectation value
since the Clebsch-Gordan coefficients are real. The rhs is equivalent with
which is equal to
We have till now seen the following definition of a two-body matrix element. With quantum numbers $p=j_pm_p$ etc., we have a two-body state defined as
where $|\Phi_0\rangle$ is a chosen reference state, say for example the Slater determinant which approximates
${}^{16}\mbox{O}$ with the $0s$ and the $0p$ shells being filled, and $M=m_p+m_q$. Recall that we label single-particle states above the Fermi level as $abcd\dots$ and states below the Fermi level for $ijkl\dots$.
In case of two-particles in the single-particle states $a$ and $b$ outside ${}^{16}\mbox{O}$ as a closed shell core, say ${}^{18}\mbox{O}$,
we would write the representation of the Slater determinant as
In case of two-particles removed from say ${}^{16}\mbox{O}$, for example two neutrons in the single-particle states $i$ and $j$, we would write this as
For a one-hole-one-particle state we have
and finally for a two-particle-two-hole state we have
Let us go back to the case of two-particles in the single-particle states $a$ and $b$ outside ${}^{16}\mbox{O}$ as a closed shell core, say ${}^{18}\mbox{O}$. The representation of the Slater determinant is
The anti-symmetrized matrix element is detailed as
and note that anti-symmetrization means
$$ \langle (j_am_a)(j_bm_b)\vert \hat{V}\vert (j_cm_c)(j_dm_d)\rangle_{\mathrm{AS}} = -\langle (j_bm_b)(j_am_a)\vert \hat{V}\vert (j_cm_c)(j_dm_d)\rangle_{\mathrm{AS}} = -\langle (j_am_a)(j_bm_b)\vert \hat{V}\vert (j_dm_d)(j_cm_c)\rangle_{\mathrm{AS}} = \langle (j_bm_b)(j_am_a)\vert \hat{V}\vert (j_dm_d)(j_cm_c)\rangle_{\mathrm{AS}}. $$
This matrix element is the expectation value of
We have also defined matrix elements in the coupled basis, the so-called $J$-coupled scheme. In this case the two-body wave function for two neutrons outside ${}^{16}\mbox{O}$ is written as
with
We have now an explicit coupling order, where the angular momentum $j_a$ is coupled to the angular momentum $j_b$ to yield a final two-body angular momentum $J$. The normalization factor is
The implementation of the Pauli principle looks different in the $J$-scheme compared with the $m$-scheme. In the latter, no two fermions or more can have the same set of quantum numbers. In the $J$-scheme, when we write a state with the shorthand
we do refer to the angular momenta only. This means that another way of writing the last state is
We will use this notation throughout when we refer to a two-body state in $J$-scheme. The Kronecker $\delta$ function in the normalization factor refers thus to the values of $j_a$ and $j_b$. If two identical particles are in a state with the same $j$-value, then only even values of the total angular momentum apply.
Note also that, using the anti-commuting properties of the creation operators, we obtain
Furthermore, using the property of the Clebsch-Gordan coefficient
which can be used to show that
is equal to
The two-body matrix element is a scalar and since it obeys rotational symmetry, it is diagonal in $J$, meaning that the corresponding matrix element in $J$-scheme is
$$ \langle (j_aj_b)JM\vert \hat{V}\vert (j_cj_d)JM\rangle = N_{ab}N_{cd}\sum_{m_am_bm_cm_d}\langle j_am_aj_bm_b\vert JM\rangle\langle j_cm_cj_dm_d\vert JM\rangle\langle j_am_aj_bm_b\vert \hat{V}\vert j_cm_cj_dm_d\rangle, $$
with $N_{ab}$ and $N_{cd}$ the normalization factors introduced above,
and note that of the four $m$-values in the above sum, only three are independent due to the constraint $m_a+m_b=M=m_c+m_d$.
Since
the anti-symmetrized matrix elements need now to obey the following relations
$$ \langle (j_aj_b)JM\vert \hat{V}\vert (j_cj_d)JM\rangle = -(-1)^{j_a+j_b-J}\langle (j_bj_a)JM\vert \hat{V}\vert (j_cj_d)JM\rangle, $$
$$ \langle (j_aj_b)JM\vert \hat{V}\vert (j_cj_d)JM\rangle = -(-1)^{j_c+j_d-J}\langle (j_aj_b)JM\vert \hat{V}\vert (j_dj_c)JM\rangle = (-1)^{j_a+j_b+j_c+j_d}\langle (j_bj_a)JM\vert \hat{V}\vert (j_dj_c)JM\rangle, $$
where the last relation follows from the fact that $J$ is an integer and $2J$ is always an even number.
Using the orthogonality properties of the Clebsch-Gordan coefficients,
and
we can also express the two-body matrix element in $m$-scheme in terms of that in $J$-scheme, that is, if we multiply with
from left in
we obtain
Let us now apply the theorem to some selected expectation values. In several of the expectation values we will meet when evaluating explicit matrix elements, we will have to deal with expectation values involving spherical harmonics. A general central interaction can be expanded in a complete set of functions like the Legendre polynomials, that is, we have an interaction, with $r_{ij}=|{\bf r}_i-{\bf r}_j|$,
with $P_{\nu}$ being a Legendre polynomial
We will come back later to how we split the above into a contribution that involves only one of the coordinates.
This means that we will need matrix elements of the type
We can rewrite the Wigner-Eckart theorem as
This equation is true for all values of $\theta$ and $\phi$. It must also hold for $\theta=0$.
We have
and for $\theta=0$, the spherical harmonic
which results in
Till now we have mainly been concerned with the coupling of two angular momenta $j_a$ and $j_b$ to a final angular momentum $J$. If we wish to describe a three-body state with a final angular momentum $J$, we need to couple three angular momenta, say the two momenta $j_a,j_b$ to a third one $j_c$. The coupling order is important and leads to a less trivial implementation of the Pauli principle. With three angular momenta there are obviously $3!$ ways by which we can combine the angular momenta. In $m$-scheme a three-body Slater determinant is represented as (say for the case of $^{19}$O, three neutrons outside the core of $^{16}$O),
The Pauli principle is automagically implemented via the anti-commutation relations.
However, when we deal with the same state in an angular momentum coupled basis, we need to be a little bit more careful. We can namely couple the states as follows
that is, we couple first $j_a$ to $j_b$ to yield an intermediate angular momentum $J_{ab}$, then to $j_c$ yielding the final angular momentum $J$.
Now, nothing hinders us from recoupling this state by coupling $j_b$ to $j_c$, yielding an intermediate angular momentum $J_{bc}$ and then couple this angular momentum to $j_a$, resulting in the final angular momentum $J'$.
That is, we can have
We will always assume that we work with orthonormal states; this means that when we compute the overlap between these two possible ways of coupling angular momenta, we get
We use then the latter equation to define the so-called $6j$-symbols
where the symbol in curly brackets $\{\}$ is the $6j$ symbol. A specific coupling order has to be respected in the symbol, and the so-called triangular relations between three angular momenta need to be satisfied, that is
The $6j$ symbol is invariant under the permutation of any two columns
The $6j$ symbol is also invariant if upper and lower arguments are interchanged in any two columns
The $6j$ symbols satisfy this orthogonality relation
The symbol $\{j_1j_2j_3\}$ (called the triangular delta) is equal to one if the triad $(j_1j_2j_3)$ satisfies the triangular conditions and zero otherwise. A useful value is given when say one of the angular momenta are zero, say $J_{bc}=0$, then we have
With the $6j$ symbol defined, we can go back and rewrite the overlap between the two ways of recoupling angular momenta in terms of the $6j$ symbol. That is, we can have
Can you find the inverse relation?
These relations can in turn be used to write out the fully anti-symmetrized three-body wave function in a $J$-scheme coupled basis.
If you opt then for a specific coupling order, say $| ([j_a\rightarrow j_b]J_{ab}\rightarrow j_c) JM\rangle$, you need to express this representation in terms of the other coupling possibilities.
Note that the two-body intermediate state, that is, the state which involves the quantum numbers $j_a$ and $j_b$, is assumed to be antisymmetric but not normalized.
with an implicit sum over $J_{ab}$. The antisymmetrization operator ${\cal A}$ is used here to indicate that we need to antisymmetrize the state. Challenge: Use the definition of the $6j$ symbol and find an explicit expression for the above three-body state using the coupling order $| ([j_a\rightarrow j_b]J_{ab}\rightarrow j_c) J\rangle$.
We can also couple together four angular momenta. Consider two four-body states: with single-particle angular momenta $j_a$, $j_b$, $j_c$ and $j_d$ we can have a state with final $J$
where we read the coupling order as follows: $j_a$ couples with $j_b$ to give an intermediate angular momentum $J_{ab}$, and $j_c$ couples with $j_d$ to give an intermediate angular momentum $J_{cd}$. The two intermediate angular momenta $J_{ab}$ and $J_{cd}$ are in turn coupled to a final $J$. These operations involve three Clebsch-Gordan coefficients.
Alternatively, we could couple in the following order
The overlap between these two states
is equal to
with the symbol in curly brackets $\{\}$ being the $9j$-symbol. We see that a $6j$ symbol involves four Clebsch-Gordan coefficients, while the $9j$ symbol involves six.
A $9j$ symbol is invariant under reflection in either diagonal
The permutation of any two rows or any two columns yields a phase factor $(-1)^S$, where
As an example we have
A useful case is when say $J=0$ in
The tensor operator in the nucleon-nucleon potential is given by
and it is zero for the $^1S_0$ wave.
How do we get here?
To derive the expectation value of the nuclear tensor force, we recall that the product of two irreducible tensor operators is
and using the orthogonality properties of the Clebsch-Gordan coefficients we can rewrite the above as
Assume now that the operators $T$ and $U$ act on different parts of say a wave function. The operator $T$ could act on the spatial part only while the operator $U$ acts only on the spin part. This means also that these operators commute. The reduced matrix element of this operator is thus, using the Wigner-Eckart theorem,
Starting with
we assume now that $T$ acts only on $j_a$ and $j_c$ and that $U$ acts only on $j_b$ and $j_d$. The matrix element $\langle (j_aj_b)JM|\left[ T^{p}_{m_p}U^{q}_{m_q} \right]^{r}_{m_r}|(j_cj_d)J'M'\rangle$ can be written out, when we insert a complete set of states $|j_im_ij_jm_j\rangle\langle j_im_ij_jm_j|$ between $T$ and $U$, as
The complete set of states that was inserted between $T$ and $U$ reduces to $|j_cm_cj_bm_b\rangle\langle j_cm_cj_bm_b|$ due to orthogonality of the states.
Combining the last two equations from the previous slide and applying the Wigner-Eckart theorem, we arrive at (rearranging phase factors)
which can be rewritten in terms of a $9j$ symbol as
From this expression we can in turn compute for example the spin-spin operator of the tensor force.
In case $r=0$, that is, when two tensor operators are coupled to a scalar, we can use (with $p=q$)
and obtain
Another very useful expression is the case where the operators act in just one space. We state here without showing that the reduced matrix element
The tensor operator in the nucleon-nucleon potential can be written as
Since the irreducible tensor
$\left[{\bf r} \otimes {\bf r} \right]^{(2)}$
operates only on the angular quantum numbers and
$\left[{\bf \sigma}_1 \otimes {\bf \sigma}_2\right]^{(2)}$
operates only on
the spin states we can write the matrix element
We need that the coordinate vector ${\bf r}$ can be written in terms of spherical components as
Using this expression we get
The product of two spherical harmonics can be written as
$$ Y_{l_1m_1}(\hat{r})Y_{l_2m_2}(\hat{r})=\sum_{LM}\sqrt{\frac{(2l_1+1)(2l_2+1)}{4\pi(2L+1)}}\langle l_10l_20\vert L0\rangle\langle l_1m_1l_2m_2\vert LM\rangle Y_{LM}(\hat{r}). $$
Using this relation we get
We can then use this relation to rewrite the reduced matrix element containing the position vector as
Using the reduced matrix element of the spin operators defined as
and inserting these expressions for the two reduced matrix elements we get
Normally, we start with a nucleon-nucleon interaction fitted to reproduce scattering data.
It is common then to represent this interaction in terms of the relative momentum $k$, the center-of-mass momentum $K$,
and various partial wave quantum numbers like the spin $S$, the total relative angular momentum ${\cal J}$, the isospin $T$, the relative orbital momentum $l$ and finally the corresponding center-of-mass orbital momentum $L$.
We can then write the free interaction matrix $V$ as
Transformations from the relative and center-of-mass motion system to the lab system will be discussed below.
To obtain a $V$-matrix in a h.o. basis, we need
the transformation
with $n$ and $N$ the principal quantum numbers of the relative and center-of-mass motion, respectively.
The parameter $\alpha$ is the chosen oscillator length.
The most commonly employed sp basis is the harmonic oscillator, which in turn means that a two-particle wave function with total angular momentum $J$ and isospin $T$ can be expressed as
where the term $\left\langle nlNL| n_al_an_bl_b\right\rangle$ is the so-called Moshinsky-Talmi transformation coefficient (see chapter 18 of Alex Brown's notes).
The term $\langle ab|LSJ \rangle $ is a shorthand for the $LS-jj$ transformation coefficient,
Here we use $\hat{x} = \sqrt{2x +1}$. The factor $F$ is defined as $F=\frac{1-(-1)^{l+S+T}}{\sqrt{2}}$ if $s_a = s_b$.
The $\hat{V}$-matrix in terms of harmonic oscillator wave functions reads
The label $a$ represents here all the single particle quantum numbers
$n_{a}l_{a}j_{a}$.
We assume here that we are only interested in the ground state of the system and expand the exact wave function in term of a series of Slater determinants
where we have assumed that the true ground state is dominated by the solution of the unperturbed problem, that is
The state $\vert \Psi_0\rangle$ is not normalized, rather we have used an intermediate normalization $\langle \Phi_0 \vert \Psi_0\rangle=1$ since we have $\langle \Phi_0\vert \Phi_0\rangle=1$.
The Schroedinger equation is
and multiplying the latter from the left with $\langle \Phi_0\vert $ gives
and subtracting from this equation
and using the fact that both operators $\hat{H}$ and $\hat{H}_0$ are hermitian results in
which is an exact result. We call this quantity the correlation energy.
This equation forms the starting point for all perturbative derivations. However, as it stands it represents nothing but a mere formal rewriting of Schroedinger's equation and is not of much practical use. The exact wave function $\vert \Psi_0\rangle$ is unknown. In order to obtain a perturbative expansion, we need to expand the exact wave function in terms of the interaction $\hat{H}_I$.
Here we have assumed that our model space defined by the operator $\hat{P}$ is one-dimensional, meaning that
and
We can thus rewrite the exact wave function as
Going back to the Schr\"odinger equation, we can rewrite it, adding and subtracting a term $\omega \vert \Psi_0\rangle$, as
where $\omega$ is an energy variable to be specified later.
We assume also that the resolvent of $\left(\omega-\hat{H}_0\right)$ exists, that is, it has an inverse which defines the unperturbed Green's function as
We can rewrite Schroedinger's equation as
and multiplying from the left with $\hat{Q}$ results in
which is possible since we have defined the operator $\hat{Q}$ in terms of the eigenfunctions of $\hat{H}_0$.
These operators commute meaning that
With these definitions we can in turn define the wave function as
This equation is again nothing but a formal rewrite of Schr\"odinger's equation
and does not represent a practical calculational scheme.
It is a non-linear equation in two unknown quantities, the energy $E$ and the exact
wave function $\vert \Psi_0\rangle$. We can however start with a guess for $\vert \Psi_0\rangle$ on the right hand side of the last equation.
The most common choice is to start with the function which is expected to exhibit the largest overlap with the wave function we are searching for, namely $\vert \Phi_0\rangle$. This can again be inserted in the solution for $\vert \Psi_0\rangle$ in an iterative fashion and if we continue along these lines we end up with
for the wave function and
which is now a perturbative expansion of the exact energy in terms of the interaction $\hat{H}_I$ and the unperturbed wave function $\vert \Phi_0\rangle$.
In our equations for $\vert \Psi_0\rangle$ and $\Delta E$ in terms of the unperturbed solutions $\vert \Phi_i\rangle$ we still have an undetermined parameter $\omega$ and a dependency on the exact energy $E$. Not much has thus been gained from a practical computational point of view.
In Brillouin-Wigner perturbation theory it is customary to set $\omega=E$. This results in the following perturbative expansion for the energy $\Delta E$
$$ \Delta E=\sum_{i=1}^{\infty}\langle \Phi_0\vert \hat{H}_I\left[\frac{\hat{Q}}{E-\hat{H}_0}\hat{H}_I\right]^{i-1}\vert \Phi_0\rangle $$
$$ =\langle \Phi_0\vert \hat{H}_I\vert \Phi_0\rangle+\langle \Phi_0\vert \hat{H}_I\frac{\hat{Q}}{E-\hat{H}_0}\hat{H}_I\vert \Phi_0\rangle+\langle \Phi_0\vert \hat{H}_I\frac{\hat{Q}}{E-\hat{H}_0}\hat{H}_I\frac{\hat{Q}}{E-\hat{H}_0}\hat{H}_I\vert \Phi_0\rangle+\dots $$
This expression depends however on the exact energy $E$ and is again not very convenient from a practical point of view. It can obviously be solved iteratively, by starting with a guess for $E$ and then solve till some kind of self-consistency criterion has been reached.
Actually, the above expression is nothing but a rewrite again of the full Schr\"odinger equation.
We define $e=E-\hat{H}_0$ and recall that $\hat{H}_0$ commutes with $\hat{Q}$ by construction and that $\hat{Q}$ is an idempotent operator, $\hat{Q}^2=\hat{Q}$. Using this in the above expansion for $\Delta E$ we can write the denominator as
Inserted in the expression for $\Delta E$ leads to
In Rayleigh-Schroedinger (RS) perturbation theory we set $\omega = W_0$, the unperturbed energy of $\vert \Phi_0\rangle$, and obtain the following expression for the energy difference
$$ \Delta E=\sum_{i=1}^{\infty}\langle \Phi_0\vert \hat{H}_I\left[\frac{\hat{Q}}{W_0-\hat{H}_0}\left(\hat{H}_I-\Delta E\right)\right]^{i-1}\vert \Phi_0\rangle . $$
Recalling that $\hat{Q}$ commutes with $\hat{H_0}$ and since $\Delta E$ is a constant we obtain that
Inserting this results in the expression for the energy results in
We can now express this as a perturbative expansion in terms of $\hat{H}_I$, where we iterate the last expression in terms of $\Delta E$
We get the following expression for $\Delta E^{(i)}$
which is just the contribution to first order in perturbation theory,
which is the contribution to second order.
being the third-order contribution.
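For orientation, the standard low-order Rayleigh-Schroedinger expressions take the following form (quoted here in the conventional notation; the elided equations above may use a slightly different layout):
$$ \Delta E^{(1)}=\langle \Phi_0\vert \hat{H}_I\vert \Phi_0\rangle, \qquad \Delta E^{(2)}=\langle \Phi_0\vert \hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}\hat{H}_I\vert \Phi_0\rangle, $$
$$ \Delta E^{(3)}=\langle \Phi_0\vert \hat{H}_I\frac{\hat{Q}}{W_0-\hat{H}_0}\left(\hat{H}_I-\Delta E^{(1)}\right)\frac{\hat{Q}}{W_0-\hat{H}_0}\hat{H}_I\vert \Phi_0\rangle . $$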
Let us analyse a given contribution to first order in perturbation theory, that is, the contribution which includes (more material to come)
Studies of infinite nuclear matter play an important role in nuclear physics. The aim of this part of the lectures is to provide the necessary ingredients for performing studies of neutron star matter (or matter in $\beta$-equilibrium) and symmetric nuclear matter. We start however with the electron gas in two and three dimensions, for both historical and pedagogical reasons. Since there are several benchmark calculations for the electron gas, this small detour will allow us to establish the necessary formalism. Thereafter we will study infinite nuclear matter
at the Hartree-Fock level with realistic nuclear forces and
using many-body methods like coupled-cluster theory or in-medium SRG as discussed in our previous sections.
The electron gas is perhaps the only realistic model of a system of many interacting particles that allows for a solution of the Hartree-Fock equations in closed form. Furthermore, to first order in the interaction, one can also compute in closed form the total energy and several other properties of a many-particle system. The model gives a very good approximation to the properties of valence electrons in metals. The assumptions are
System of electrons that is not influenced by external forces except by an attraction provided by a uniform background of ions. These ions give rise to a uniform background charge. The ions are stationary.
The system as a whole is neutral.
We assume we have $N_e$ electrons in a cubic box of length $L$ and volume $\Omega=L^3$. This volume contains also a uniform distribution of positive charge with density $N_ee/\Omega$.
The homogeneous electron gas is one of the few examples of a system of many interacting particles that allows for a solution of the mean-field Hartree-Fock equations in closed form. To first order in the electron-electron interaction, this applies to ground state properties like the energy and its pertinent equation of state as well. The homogeneous electron gas is a system of electrons that is not influenced by external forces except for an attraction provided by a uniform background of ions. These ions give rise to a uniform background charge. The ions are stationary and the system as a whole is neutral. Irrespective of this simplicity, this system, in both two and three dimensions, has eluded a proper description of correlations in terms of various first principle methods, except perhaps for quantum Monte Carlo methods. In particular, the diffusion Monte Carlo calculations of Ceperley and Alder and of Tanatar and Ceperley are presently still considered as the best possible benchmarks for the three- and two-dimensional electron gas.
The electron gas, in two or three dimensions is thus interesting as a test-bed for electron-electron correlations. The three-dimensional electron gas is particularly important as a cornerstone of the local-density approximation in density-functional theory. In the physical world, systems similar to the three-dimensional electron gas can be found in, for example, alkali metals and doped semiconductors. Two-dimensional electron fluids are observed on metal and liquid-helium surfaces, as well as at metal-oxide-semiconductor interfaces. However, the Coulomb interaction has an infinite range, and therefore long-range correlations play an essential role in the electron gas.
At low densities, the electrons become localized and form a lattice. This so-called Wigner crystallization is a direct consequence of the long-ranged repulsive interaction. At higher densities, the electron gas is better described as a liquid. When using, for example, Monte Carlo methods the electron gas must be approximated by a finite system. The long-range Coulomb interaction in the electron gas causes additional finite-size effects that are not present in other infinite systems like nuclear matter or neutron star matter. This poses additional challenges to many-body methods when applied to the electron gas.
This is a homogeneous system and the one-particle wave functions are given by plane wave functions normalized to a volume $\Omega$ for a box with length $L$ (the limit $L\rightarrow \infty$ is to be taken after we have computed various expectation values)
where $\mathbf{k}$ is the wave number and $\xi_{\sigma}$ is a spin function for either spin up or down
We assume first that the electrons interact via a central, symmetric and translationally invariant interaction $V(r_{12})$ with $r_{12}=|\mathbf{r}_1-\mathbf{r}_2|$. The interaction is spin independent.
The total Hamiltonian consists then of kinetic and potential energy
The operator for the kinetic energy can be written as
with the electronic part
where we have introduced an explicit convergence factor (the limit $\mu\rightarrow 0$ is performed after having calculated the various integrals). Correspondingly, we have
which is the energy contribution from the positive background charge with density $n(\mathbf{r})=N/\Omega$. Finally,
resulting in
The previous result can be rewritten in terms of the density
where $n=N_e/\Omega$, $N_e$ being the number of electrons, and $r_s$ is the radius of a sphere which represents the volume per conducting electron.
It can be convenient to use the Bohr radius $a_0=\hbar^2/e^2m_e$.
For most metals we have a relation $r_s/a_0\sim 2-6$. The quantity $r_s$ is dimensionless.
In the second exercise below we find that the total energy $E_0/N_e=\langle\Phi_{0}|\hat{H}|\Phi_{0}\rangle/N_e$ for this system to first order in the interaction is given as
a) Show first that
Hint: Introduce the convergence factor $e^{-\mu\vert\mathbf{r}-\mathbf{r}'\vert}$ in the potential and use $\sum_{\mathbf{k}}\rightarrow \frac{V}{(2\pi)^{3}}\int d\mathbf{k}$.
b) Rewrite the above result as a function of the density
where $n=N/V$, $N$ being the number of particles, and $r_s$ is the radius of a sphere which represents the volume per conducting electron.
It can be convenient to use the Bohr radius $a_0=\hbar^2/e^2m$. For most metals we have a relation $r_s/a_0\sim 2-6$. c) Make a plot of the free electron energy and the Hartree-Fock energy and discuss the behavior around the Fermi surface. Extract also the Hartree-Fock band width $\Delta\varepsilon^{HF}$ defined as
Compare this result with the corresponding one for a free electron and comment on your results. How large is the contribution due to the exchange term in the Hartree-Fock equation? We will now define a quantity called the effective mass. For $\vert\mathbf{k}\vert$ near $k_{F}$, we can Taylor expand the Hartree-Fock energy as
If we compare the latter with the corresponding expression for the non-interacting system
we can define the so-called effective Hartree-Fock mass as
d) Compute $m_{HF}^{*}$ and comment your results.
e) Show that the level density (the number of single-electron states per unit energy) can be written as
Calculate $n(\varepsilon_{F}^{HF})$ and comment the results.
We want to show that, given the Hartree-Fock equation for the electron gas
the single-particle energy can be written as
We introduce the convergence factor $e^{-\mu\vert\mathbf{r}-\mathbf{r}'\vert}$ in the potential and use $\sum_{\mathbf{k}}\rightarrow \frac{V}{(2\pi)^{3}}\int d\mathbf{k}$. We can then rewrite the integral as
and introducing the abovementioned convergence factor we have
With a change variables to $\mathbf{x} = \mathbf{r}-\mathbf{r}'$ and $\mathbf{y}=\mathbf{r}'$ we rewrite the last integral as
The integration over $\mathbf{x}$ can be performed using spherical coordinates, resulting in (with $x=\vert \mathbf{x}\vert$)
We obtain
This results gives us
where we have used that the integrand on the left-hand side does not depend on $\mathbf{y}$ and that $\int d\mathbf{y}=V$.
Introducing spherical coordinates we can rewrite the integral as
and with the change of variables $\cos{(\theta)}=u$ we have
which gives
Introducing new variables $x=p+k$ and $y=p-k$, we obtain after some straightforward reordering of the integral
which gives the abovementioned expression for the single-particle energy.
Introducing the dimensionless quantity $x=k/k_F$ and the function
we can rewrite the single-particle Hartree-Fock energy as
and dividing by the non-interacting contribution at the Fermi level,
we have
where $a_0=0.0529$ nm is the Bohr radius, setting thereby a natural length scale.
By introducing the radius $r_s$ of a sphere whose volume is the volume occupied by each electron, we can rewrite the previous equation in terms of $r_s$ using that the electron density $n=N/V$
we have, using $k_F=1.92/r_s$,
with $r_s/a_0 \sim 2$-$6$ for most metals.
We can now define the so-called band width, that is, the spread between the maximal and the minimal single-particle energy in the conduction band of a metal (up to the Fermi level). For $x=1$ and $r_s/a_0=4$ we have
and for $x=0$ we have
which results in an occupied band width of
This quantity measures the difference between the $k=0$ single-particle energy and the energy at the Fermi level. The general result is
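As a consistency check, for $r_s/a_0=4$ one finds $\varepsilon^{HF}_{k_F}/\varepsilon_{F}^{0}=1-2.652\,F(1)\approx -0.33$ and $\varepsilon^{HF}_{k=0}/\varepsilon_{F}^{0}=-2.652\,F(0^{+})\approx -2.65$, so that $\Delta\varepsilon^{HF}\approx 2.3\,\varepsilon_{F}^{0}$. In the same normalization (the one used in the Python code below) one finds

$$ \Delta\varepsilon^{HF} = \left(1 + 0.332\,\frac{r_s}{a_0}\right)\varepsilon_{F}^{0}. $$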
The following Python code produces a plot of the electron energy for a free electron (only kinetic energy) and for the Hartree-Fock solution. We have chosen here a ratio $r_s/a_0=4$ and the energies are plotted as functions of $k/k_F$.
In [1]:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import rc

# LaTeX rendering for labels and title (requires a working LaTeX installation)
rc('text', usetex=True)
rc('font', **{'family': 'serif'})
font = {'family': 'serif',
        'color': 'darkred',
        'weight': 'normal',
        'size': 16,
        }
N = 100
# Avoid k/k_F = 0 exactly, where the expression for F is ill-defined numerically
x = np.linspace(1.0e-3, 2.0, N)
# The function F(x) entering the Hartree-Fock exchange contribution
F = 0.5 + np.log(abs((1.0 + x)/(1.0 - x)))*(1.0 - x*x)*0.25/x
# Hartree-Fock single-particle energy in units of the free Fermi energy, for r_s/a_0 = 4
y = x*x - 4.0*0.663*F
plt.plot(x, y, 'b-', label=r'Hartree-Fock')
plt.plot(x, x*x, 'r-', label=r'Free electron')
plt.title(r'{\bf Hartree-Fock single-particle energy for the electron gas}', fontsize=20)
plt.text(1.2, -2.0, r'Parameters: $r_s/a_0=4$', fontdict=font)
plt.xlabel(r'$k/k_F$', fontsize=20)
plt.ylabel(r'$\varepsilon_k^{HF}/\varepsilon_0^F$', fontsize=20)
plt.legend()
# Tweak spacing to prevent clipping of the ylabel
plt.subplots_adjust(left=0.15)
plt.savefig('hartreefockspelgas.pdf', format='pdf')
plt.show()
From the plot we notice that the exchange term considerably increases the occupied band width compared with the non-interacting gas of electrons.
We consider a system of electrons in infinite matter, the so-called electron gas. This is a homogeneous system and the one-particle states are given by plane waves normalized within a volume $\Omega$ for a box with side length $L$ (the limit $L\rightarrow \infty$ is to be taken after we have computed various expectation values)
where $\mathbf{k}$ is the wave number and $\xi_{\sigma}$ is a spin function for either spin up or down
We assume that we have periodic boundary conditions which limit the allowed wave numbers to
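For completeness, a sketch of these standard expressions, consistent with the definitions above, is

$$ \psi_{\mathbf{k}\sigma}(\mathbf{r}) = \frac{1}{\sqrt{\Omega}}\,e^{i\mathbf{k}\cdot\mathbf{r}}\,\xi_{\sigma}, \qquad k_{i} = \frac{2\pi n_{i}}{L}, \quad n_{i}=0,\pm 1,\pm 2,\dots, \quad i=x,y,z. $$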
We assume first that the particles interact via a central, symmetric and translationally invariant interaction $V(r_{12})$ with $r_{12}=|\mathbf{r}_1-\mathbf{r}_2|$. The interaction is spin independent.
The total Hamiltonian consists then of kinetic and potential energy
a) Show that the operator for the kinetic energy can be written as
Find also the number operator $\hat{N}$ and find a corresponding expression for the interaction $\hat{V}$ expressed with creation and annihilation operators. The expression for the interaction has to be written in $k$ space, even though $V$ depends only on the relative distance. It means that you need to set up the Fourier transform $\langle \mathbf{k}_i\mathbf{k}_j| V | \mathbf{k}_m\mathbf{k}_n\rangle$.
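As a hint (a sketch, not the full derivation asked for), one should arrive at expressions of the form

$$ \hat{T} = \sum_{\mathbf{k}\sigma}\frac{\hbar^{2}k^{2}}{2m}\,a^{\dagger}_{\mathbf{k}\sigma}a_{\mathbf{k}\sigma}, \qquad \hat{N} = \sum_{\mathbf{k}\sigma}a^{\dagger}_{\mathbf{k}\sigma}a_{\mathbf{k}\sigma}, $$

with the interaction expressed through the Fourier transform of $V(r_{12})$ with respect to the momentum transfer.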
The Hamiltonian operator is given by
with the electronic part
where we have introduced an explicit convergence factor (the limit $\mu\rightarrow 0$ is performed after having calculated the various integrals). Correspondingly, we have
which is the energy contribution from the positive background charge with density $n(\mathbf{r})=N/\Omega$. Finally,
is the interaction between the electrons and the positive background. b) Show that
and
c) Show thereafter that the final Hamiltonian can be written as
with
and
d) Calculate $E_0/N=\langle \Phi_{0}\vert H\vert \Phi_{0}\rangle/N$ for this system to first order in the interaction. Show, by using
with $\rho=N/\Omega$, $r_0$ being the radius of a sphere representing the volume an electron occupies and the Bohr radius $a_0=\hbar^2/e^2m$, that the energy per electron can be written as
Here we have defined $r_s=r_0/a_0$ to be a dimensionless quantity.
e) Plot your results. Why is this system stable? Calculate thermodynamical quantities like the pressure, given by
and the bulk modulus
and comment on your results.
We have to show first that
and
And then that the final Hamiltonian can be written as
with
and
Let us now calculate the following part of the Hamiltonian
where $n(\mathbf{r}) = N_e/\Omega$, the density of the positive background charge. We define $\mathbf{r}_{12} = \mathbf{r} - \mathbf{r}'$, resulting in $d\mathbf{r}_{12} = d\mathbf{r}$, and allowing us to rewrite the integral as
Here we have used that the remaining volume integral gives a factor $\Omega$. We change to spherical coordinates and the lack of angular dependence yields a factor $4\pi$, resulting in
Solving by partial integration
gives
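A compact summary of this step for the background-background term, using the convergence factor introduced earlier, is

$$ \frac{e^{2}}{2}\,\frac{N_{e}^{2}}{\Omega^{2}}\int d\mathbf{r}\,d\mathbf{r}'\,\frac{e^{-\mu\vert\mathbf{r}-\mathbf{r}'\vert}}{\vert\mathbf{r}-\mathbf{r}'\vert} = \frac{e^{2}}{2}\,\frac{N_{e}^{2}}{\Omega}\,\frac{4\pi}{\mu^{2}}, $$

since $\int_{0}^{\infty}dr\,r\,e^{-\mu r}=1/\mu^{2}$.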
The next term is
Inserting $n(\mathbf{r})$ and changing variables in the same way as in the previous integral, $\mathbf{y} = \mathbf{r} - \mathbf{x}_i$, we get $\mathrm{d}^3 \mathbf{y} = \mathrm{d}^3 \mathbf{r}$. This gives
We have already seen this type of integral. The answer is
which gives
Finally, we need to evaluate $\hat H_{el}$. This term reads
The last term represents the repulsion between two electrons. It is a central symmetric interaction and is translationally invariant. The potential is given by the expression
where the sum is taken over all particles in the finite box. The Ewald electron-electron interaction operator can be written as
where $v_{E}(\mathbf{r})$ is the effective two-body interaction and $v_{0}$ is the self-interaction, defined as $v_{0} = \lim_{\mathbf{r} \rightarrow 0} \left\{ v_{E}(\mathbf{r}) - 1/r\right\} $.
The negative electron charges are neutralized by a positive, homogeneous background charge. Fraser et al. explain how the electron-background and background-background terms, $\hat{H}_{eb}$ and $\hat{H}_{bb}$, vanish when using Ewald's interaction for the three-dimensional electron gas. Using the same arguments, one can show that these terms are also zero in the corresponding two-dimensional system.
In the three-dimensional electron gas, the Ewald interaction is
where $\mathbf{u}_{i}$ is the unit vector for dimension $i$, and the lattice translation vectors are defined for all integers $n_{x}$, $n_{y}$, and $n_{z}$. These vectors are used to obtain all image cells in the entire real space. The parameter $\eta$ decides how the Coulomb interaction is divided into a short-ranged and a long-ranged part, and does not alter the total function. However, the number of operations needed to calculate the Ewald interaction with a desired accuracy depends on $\eta$, and $\eta$ is therefore often chosen to optimize the convergence as a function of the simulation-cell size. In our calculations, we choose $\eta$ to be an infinitesimally small positive number, similarly as was done by Shepherd et al. and Roggero et al.
This gives an interaction that is evaluated only in Fourier space.
When studying the two-dimensional electron gas, we use an Ewald interaction that is quasi two-dimensional. The interaction is derived in three dimensions, with Fourier discretization in only two dimensions. The Ewald effective interaction has the form
where the Fourier vectors $\mathbf{k}$ and the position vector $\mathbf{r}_{xy}$ are defined in the $(x,y)$ plane. When applying the interaction $v_{E}(\mathbf{r})$ to two-dimensional systems, we set $z$ to zero.
As in the three-dimensional case, we choose $\eta$ to approach zero from above here as well. The resulting Fourier-transformed interaction is
where the Kronecker delta functions $\delta_{\mathbf{k}_{p}\mathbf{k}_{r}}$ and $\delta_{\mathbf{k}_{p}\mathbf{k}_{s}}$ ensure that the contribution with zero momentum transfer vanishes.
Similarly, the matrix elements for the two-dimensional electron gas are
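Up to the Kronecker deltas enforcing momentum and spin conservation, the essential ingredients in these matrix elements are the Fourier coefficients of the Coulomb interaction,

$$ v_{3D}(\mathbf{q}) = \frac{4\pi e^{2}}{\Omega\,q^{2}}, \qquad v_{2D}(\mathbf{q}) = \frac{2\pi e^{2}}{A\,q}, $$

for momentum transfer $\mathbf{q}\neq\mathbf{0}$, where $\Omega$ denotes the simulation-cell volume and $A$ (our notation here) its area.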
In the work of Shepherd et al. the matrix elements were defined with the self-interaction constant included in the two-body interaction. This gives Fock-operator matrix elements with a gap constant.
When using periodic boundary conditions, the discrete-momentum single-particle basis functions
are associated with the single-particle energy
for two-dimensional systems and
for three-dimensional systems.
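Explicitly, with periodic boundary conditions these energies are expected to read

$$ \varepsilon_{n_{x},n_{y}} = \frac{\hbar^{2}}{2m}\left(\frac{2\pi}{L}\right)^{2}\left(n_{x}^{2}+n_{y}^{2}\right) \quad\text{and}\quad \varepsilon_{n_{x},n_{y},n_{z}} = \frac{\hbar^{2}}{2m}\left(\frac{2\pi}{L}\right)^{2}\left(n_{x}^{2}+n_{y}^{2}+n_{z}^{2}\right), $$

in two and three dimensions, respectively.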
We choose the single-particle basis such that both the occupied and unoccupied single-particle spaces have a closed-shell structure. This means that all single-particle states corresponding to energies below a chosen cutoff are included in the basis. We study only the unpolarized spin phase, in which all orbitals are occupied with one spin-up and one spin-down electron.
The table illustrates how single-particle energies fill energy shells in a two-dimensional electron box. Here $n_{x}$ and $n_{y}$ are the momentum quantum numbers, $n_{x}^{2} + n_{y}^{2}$ determines the single-particle energy level, $N_{\uparrow \downarrow }$ represents the cumulated number of spin-orbitals in an unpolarized spin phase, and $N_{\uparrow \uparrow }$ stands for the cumulated number of spin-orbitals in a spin-polarized system.
| $n_{x}^{2}+n_{y}^{2}$ | $n_{x}$ | $n_{y}$ | $N_{\uparrow \downarrow }$ | $N_{\uparrow \uparrow }$ |
|---|---|---|---|---|
| 0 | 0  | 0  | 2  | 1  |
| 1 | -1 | 0  |    |    |
| 1 | 1  | 0  |    |    |
| 1 | 0  | -1 |    |    |
| 1 | 0  | 1  | 10 | 5  |
| 2 | -1 | -1 |    |    |
| 2 | -1 | 1  |    |    |
| 2 | 1  | -1 |    |    |
| 2 | 1  | 1  | 18 | 9  |
| 4 | -2 | 0  |    |    |
| 4 | 2  | 0  |    |    |
| 4 | 0  | -2 |    |    |
| 4 | 0  | 2  | 26 | 13 |
| 5 | -2 | -1 |    |    |
| 5 | 2  | -1 |    |    |
| 5 | -2 | 1  |    |    |
| 5 | 2  | 1  |    |    |
| 5 | -1 | -2 |    |    |
| 5 | -1 | 2  |    |    |
| 5 | 1  | -2 |    |    |
| 5 | 1  | 2  | 42 | 21 |
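The cumulative numbers in the table can be reproduced with a small Python snippet of the following kind (the helper function fill_shells is our own illustration, not part of any package):

# Count closed-shell configurations for the two-dimensional electron box.
# Each (n_x, n_y) pair carries two spin-orbitals in the unpolarized case
# and one in the fully spin-polarized case.
def fill_shells(nmax=2):
    levels = {}
    for nx in range(-nmax, nmax + 1):
        for ny in range(-nmax, nmax + 1):
            levels.setdefault(nx*nx + ny*ny, []).append((nx, ny))
    cumulative = 0
    for energy in sorted(levels):
        cumulative += len(levels[energy])
        print(f"n_x^2+n_y^2 = {energy:2d}  N(unpolarized) = {2*cumulative:3d}  N(polarized) = {cumulative:3d}")

fill_shells(nmax=2)

Running it lists the shells up to $n_{x}^{2}+n_{y}^{2}=8$; the entries through $n_{x}^{2}+n_{y}^{2}=5$ coincide with the table above.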
Finally, a useful benchmark for our calculations is the expression for the reference energy $E_0$ per particle. Defining the $T=0$ density $\rho_0$, we can in turn determine in three dimensions the radius $r_0$ of a sphere representing the volume an electron occupies (the Wigner-Seitz radius) as
In two dimensions the corresponding quantity is
One can then express the reference energy per electron in terms of the dimensionless quantity $r_s=r_0/a_0$, where we have introduced the Bohr radius $a_0=\hbar^2/e^2m$. The energy per electron computed with the reference Slater determinant can then be written as (using hereafter only atomic units, meaning that $\hbar = m = e = 1$)
for the three-dimensional electron gas. For the two-dimensional gas the corresponding expression is (show this)
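As a consistency check for the three-dimensional case, the classic first-order (Hartree-Fock) result, usually quoted in Rydbergs ($e^{2}/2a_{0}$), is

$$ \frac{E_{0}}{N} = \frac{2.21}{r_{s}^{2}} - \frac{0.916}{r_{s}}, $$

where the first term is the kinetic energy of the non-interacting gas and the second the exchange contribution; in the Hartree atomic units adopted above the two coefficients are simply halved.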
For an infinite homogeneous system, there are some particular simplifications due to the conservation of the total momentum of the particles. By symmetry considerations, the total momentum of the system has to be zero. Both the kinetic energy operator and the total Hamiltonian $\hat{H}$ are assumed to be diagonal in the total momentum $\mathbf{K}$. Hence, both the reference state $\Phi_{0}$ and the correlated ground state $\Psi$ must be eigenfunctions of the operator $\mathbf{\hat{K}}$ with the corresponding eigenvalue $\mathbf{K} = \mathbf{0}$. This leads to important simplifications in our different many-body methods. In coupled cluster theory, for example, all terms that involve single particle-hole excitations vanish.
The reduced transition probability $B$ is defined in terms of reduced matrix elements of a one-body operator by
With our definition of the reduced matrix element,
the transition probability $B$ depends upon the direction of the transition by the factor of $(2J_{i}+1)$. For electromagnetic transitions $J_{i}$ is that for the higher-energy initial state. But in Coulomb excitation the initial state is usually taken as the ground state, and it is normal to use the notation $B(\uparrow)$ for transitions from the ground state.
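In the convention implied by the $(2J_{i}+1)$ factor above, this definition reads

$$ B(i\rightarrow f) = \frac{\left\vert\langle f\vert\vert {\cal O}(\lambda)\vert\vert i\rangle\right\vert^{2}}{2J_{i}+1}. $$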
The one-body operators $ {\cal O}(\lambda ) $ represent a sum over the operators for the individual nucleon degrees of freedom $ i $
The electric transition operator is given by
where $Y^{\lambda }_{\mu }$ are the spherical harmonics and $q$ stands for proton ($q=p$) or neutron ($q=n$).
Gamma transitions with $\lambda=0$ are forbidden because the photon must carry off at least one unit of angular momentum. The $e_{q}$ are the electric charges for the proton and neutron in units of $ e $. For the free-nucleon charge we would take $e_{p}=1$ and $e_{n}=0$, for the proton and neutron, respectively. Although the bare operator acts upon the protons, we will keep the general expression in terms of $e_{q}$ in order to incorporate the effective charges for the proton and neutron, which represent the center-of-mass corrections and the average effects of the renormalization from wavefunction admixtures outside the model space.
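For reference, the electric multipole operator consistent with this description takes the standard form

$$ {\cal O}(E\lambda) = \sum_{i} e_{q_i}\, r_{i}^{\lambda}\, Y^{\lambda}_{\mu}(\hat{r}_{i})\,e, $$

with the sum running over all nucleons.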
The magnetic transition operator is given by:
$$ {\cal O}(M\lambda ) = \sum_{i}\left[ \mathbf{l}_{i}\,\frac{2g^{l}_{i}}{\lambda +1} + \mathbf{s}_{i}\,g^{s}_{i}\right]\cdot \mathbf{\nabla}_{i}\left[ r^{\lambda }_{i}\,Y^{\lambda }_{\mu }(\hat{r}_{i})\right]\mu _{N}, $$
where $\mu_{N}$ is the nuclear magneton,
and where $m_{p}$ is the mass of the proton.
The g-factors $g^{l}_{q}$ and $g^{s}_{q}$ are the orbital and spin g-factors for the proton and neutron, respectively. The free-nucleon values for the g-factors are $g^{l}_{p}=1$, $g^{l}_{n}=0$, $g^{s}_{p}=5.586$ and $g^{s}_{n}=-3.826$. We may use effective values for these g-factors to take into account the truncation of the model space.
The most common types of transitions are $E1$, $E2$ and $M1$. The $E1$ transition operator is obtained by setting $\lambda=1$:
The $E2$ transition operator with $\lambda$=2:
The $M1$ transition operator with $\lambda=1$ and with
we have
The selection rules are given by the triangle condition for the angular momenta, $ \Delta (J_{i},J_{f},\lambda ) $. The electromagnetic interaction conserves parity, and the elements of the operators for $ E\lambda $ and $ M\lambda $ can be classified according to their transformation under parity change
where we have $\pi _{O}=(-1)^{\lambda }$ for $Y^{\lambda }$, $\pi _{O}=-1$ for the vectors $\mathbf{r}$, $\mathbf{\nabla}$ and $\mathbf{p}$, and $ \pi _{O}=+1 $ for the pseudo vectors $\mathbf{l}=\mathbf{r}\times\mathbf{p}$ and $\mathbf{\sigma}$. For a given matrix element we have:
The matrix element will vanish unless $\pi _{i}\pi _{f}\pi _{O}=+1$.
The transitions are divided into two classes: those which do not change parity, $\pi _{i}\pi _{f}=+1$, which go via the operators with $\pi _{O}=+1$:
and those which do change parity, $\pi _{i}\pi _{f}=-1$, which go via the operators with $\pi _{O}=-1$:
The electromagnetic moment operator can be expressed in terms of the electromagnetic transition operators. By the parity selection rule, the moments are nonzero only for $M1$, $E2$, $M3$, $E4,\ldots$. The most common are:
and
Electromagnetic transitions and moments depend upon the reduced nuclear matrix elements $\langle f\vert\vert {\cal O}(\lambda )\vert\vert i\rangle$. These can be expressed as a sum over one-body transition densities (OBTD) times single-particle matrix elements
where the OBTD is given by
The labels $i$ and $f$ are a short-hand notation for the initial and final state quantum numbers $(n \omega _{i}J_{i})$ and $(n\omega_{f}J_{f})$, respectively. Thus the problem is divided into two parts, one involving the nuclear structure dependent one-body transition densities OBTD, and the other involving the reduced single-particle matrix elements (SPME).
The SPME for the $ E\lambda $ operator is given by:
The SPME for the spin part of the magnetic operator is
where
The SPME for the orbital part of the magnetic operator is:
where we have defined
with
For the $M1$ operator the radial matrix element is
and the SPME simplify to:
where we have
and
where
where $f$ is a dimensionless three-body phase-space factor which depends upon the beta-decay $Q$ value, and $K_{o}$ is a specific combination of fundamental constants
The $\pm$ signs refer to $\beta_{\pm}$ decay of nucleus $(A_{i},Z_{i})$ into nucleus $(A_{i},Z_{i} \mp 1)$. The weak-interaction vector ($V$) and axial-vector ($A$) coupling constants for the decay of a neutron into a proton are denoted by $g_{V}$ and $g_{A}$, respectively.
The total decay rate for a given initial state is obtained by summing the partial rates over all final states
with the branching fraction to a specific final state given by
Beta-decay lifetimes are usually given in terms of the half-life, with a total half-life of
The partial half-life for a particular final state will be denoted by $ t_{1/2} $
Historically one combines the partial half-life for a particular decay with the calculated phase-space factor $f$ to obtain an ft value given by
where
One often quotes allowed beta decays in terms of a logft value, which stands for $\log_{10}$ of the $ft_{1/2}$ value.
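Schematically, and up to the convention adopted for $K_{o}$ (whether or not it absorbs the factor $\ln 2$), the ft value is related to the nuclear matrix elements through

$$ ft_{1/2} = \frac{K_{o}}{g_{V}^{2}\,B(F) + g_{A}^{2}\,B(GT)}. $$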
The values of the coupling constants for Fermi decay,
$g_{V}$, and Gamow-Teller decay, $g_{A}$, are obtained as follows. For a $0^{+} \rightarrow 0^{+}$ nuclear transition $B(GT)=0$, and for a transition between $T=1$ analogue states with $B(F)=2$ we find
The partial half-lives and $Q$ values for several $0^{+} \rightarrow 0^{+}$ analogue transitions have been measured to an accuracy of about one part in
This result, together with the value of $K_{o}$ can be used to obtain $g_{V}$.
At the quark level $g_{V}=-g_{A}$. But for nuclear structure we use the value obtained from the neutron to proton beta decay
The operator for Fermi beta decay in terms of sums over the nucleons is
The matrix element is
where
is the total isospin raising and lowering operator, constructed out of the basic nucleon isospin raising and lowering operators
and
The matrix elements obey the triangle conditions $J_{f}=J_{i}$ ($\Delta J=0$). The Fermi operator has $\pi _{O}=+1$, and thus the initial and final nuclear states must have $\pi _{i}\pi _{f}=+1$ for the matrix element to be nonzero under the parity transform.
When isospin is conserved the Fermi matrix element must obey the isospin triangle condition $T_{f}=T_{i}$ $(\Delta T=0)$, and the Fermi operator can only connect isobaric analogue states.
For $\beta_{-}$ decay
and
For $\beta_{ + }$ we have
For neutron-rich nuclei ($N_{i}> Z_{i}$) we have $T_{i}=T_{zi}$ and thus
and
The reduced single-particle matrix elements are given by
where the matrix elements of $\mathbf{s}$ are given by
with
The matrix elements of $\mathbf{s}$ have the selection rules $\delta_{ \ell_{a} , \ell_{b} }$ and $\delta_{n _{a} ,n _{b} }$. Thus the orbits which are connected by the GT operator are very selective; they are those in the same major oscillator shell with the same $\ell$ value. The matrix elements such as $1s_{1/2}-0d_{3/2}$ which have the allowed $\Delta j$ coupling but are zero due to the $\Delta\ell$ coupling are called $\ell$-forbidden matrix elements.
Sum rules for Fermi and Gamow-Teller matrix elements can be obtained easily.
The sum rule for Fermi is obtained from the sum
The final states $f$ in the $T_{-}$ matrix element go with the $Z_{f}=Z_{i}+1$ nucleus and those in the $T_{+}$ matrix element with the $Z_{f}=Z_{i}-1$ nucleus. One can explicitly sum over the final states to obtain
The sum rule for Gamow-Teller is obtained as follows
We have used the fact that $\sigma ^{2}_{x} = \sigma ^{2}_{y}=\sigma ^{2}_{z}=1$. When $k \neq k'$ the operators commute and cancel. Thus
and
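For completeness, the resulting sum rules are

$$ \sum_{f} B_{fi}(F_{-}) - \sum_{f} B_{fi}(F_{+}) = N_{i}-Z_{i} \quad\text{and}\quad \sum_{f} B_{fi}(GT_{-}) - \sum_{f} B_{fi}(GT_{+}) = 3\,(N_{i}-Z_{i}), $$

the latter being the Ikeda sum rule.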
The sum-rule for the Fermi matrix elements applies even when isospin is not conserved.
For $N > Z$ we usually have $T_{i}=T_{zi}$ which means that $B(F_{+})=0$.
For $N=Z (T_{zi}=0)$ and $T_{i}=0$ we get $B(F_{+})=B(F_{-})=0$, and for $T_{i}=1$ we have $B(F_{+}) = B(F_{-}) = 2$. Fermi transitions which would be zero if isospin is conserved are called isospin-forbidden Fermi transitions.
When $N > Z$ there are some situations where one has $B(GT_{+})=0$, and then we obtain $B(GT_{-}) = 3(N_{i}-Z_{i})$. In particular for the $\beta_{-}$ decay of the neutron we have $B(F_{-})=1$ and $B(GT_{-})=3$.