Morten Hjorth-Jensen, National Superconducting Cyclotron Laboratory and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA and Department of Physics, University of Oslo, Oslo, Norway
Date: Spring 2017
Hartree-Fock (HF) theory is an algorithm for finding an approximate expression for the ground state of a given Hamiltonian. The basic ingredients are
with the Hartree-Fock Hamiltonian defined as
The term $\hat{u}^{\mathrm{HF}}$ is a single-particle potential to be determined by the HF algorithm.
that is, to find a local minimum with a Slater determinant $\Phi_0$ as the ansatz for the ground state.
We will show that the Hartree-Fock Hamiltonian $\hat{h}^{\mathrm{HF}}$ equals our definition of the operator $\hat{f}$ discussed in connection with the new definition of the normal-ordered Hamiltonian (see later lectures), that is we have, for a specific matrix element
meaning that
The so-called Hartree-Fock potential $\hat{u}^{\mathrm{HF}}$ brings in an explicit medium dependence due to the summation over all single-particle states below the Fermi level $F$. It also brings in an explicit dependence on the two-body interaction (in nuclear physics we can also have complicated three- or higher-body forces). The two-body interaction, with its contribution from the other fermions present, creates an effective mean field in which a given fermion moves, in addition to the external potential $\hat{u}_{\mathrm{ext}}$ which confines the motion of the fermion. For systems like nuclei there is no external confining potential. Nuclei are examples of self-bound systems, where the binding arises from the intrinsic nature of the strong force. For nuclear systems there is thus no external one-body potential in the Hartree-Fock Hamiltonian.
Large-scale diagonalization (iterative methods, the Lanczos method, dimensionalities of up to $10^{10}$ states), discussed in FYS-KJM4480
Coupled cluster theory, the favoured method in quantum chemistry, molecular and atomic physics. Applications to ab initio calculations in nuclear physics as well, also for large nuclei; discussed in FYS-KJM4480
Perturbative many-body methods, discussed in FYS-KJM4480
Density functional theories/mean-field theory and Hartree-Fock theory, also discussed in FYS-KJM4480
Monte Carlo methods (only in FYS4411, Computational quantum mechanics)
Green's function theories (depending on interest)
and others. The physics of the system hints at which many-body methods to use.
Blaizot and Ripka, Quantum Theory of Finite systems, MIT press 1986
Negele and Orland, Quantum Many-Particle Systems, Addison-Wesley, 1987.
Fetter and Walecka, Quantum Theory of Many-Particle Systems, McGraw-Hill, 1971.
Helgaker, Jorgensen and Olsen, Molecular Electronic Structure Theory, Wiley, 2001.
Mattuck, Guide to Feynman Diagrams in the Many-Body Problem, Dover, 1971.
Dickhoff and Van Neck, Many-Body Theory Exposed, World Scientific, 2006.
An operator is written as $\hat{O}$ throughout. Unless otherwise specified, the number of particles is always $N$ and $d$ is the dimension of the system. In nuclear physics we normally define the total number of particles to be $A=N+Z$, where $N$ is the total number of neutrons and $Z$ the total number of protons. In the case of other baryons, such as the isobars $\Delta$ or various hyperons such as $\Lambda$ or $\Sigma$, one needs to add their definitions. Hereafter, $N$ is reserved for the total number of particles, unless otherwise specified.
The quantum numbers of a single-particle state in coordinate space are defined by the variable
where
with $d=1,2,3$ represents the spatial coordinates and $\sigma$ is the eigenspin of the particle. For fermions with eigenspin $1/2$ this means that
and the integral $\int dx = \sum_{\sigma}\int d^dr = \sum_{\sigma}\int d{\bf r}$, and
with $x_i=({\bf r}_i,\sigma_i)$ and the projection of $\sigma_i$ takes the values $\{-1/2,+1/2\}$ for particles with spin $1/2$. We will hereafter always refer to $\Psi_{\lambda}$ as the exact wave function, and if the ground state is not degenerate we label it as
with
where the single-particle Hilbert space $\hat{H}_1$ is the space of square-integrable functions over ${\mathbb{R}}^{d}\oplus (\sigma)$, resulting in
Our Hamiltonian is invariant under the permutation (interchange) of two particles. Since we deal with fermions however, the total wave function is antisymmetric. Let $\hat{P}$ be an operator which interchanges two particles. Due to the symmetries we have ascribed to our Hamiltonian, this operator commutes with the total Hamiltonian,
meaning that $\Psi_{\lambda}(x_1, x_2, \dots , x_N)$ is an eigenfunction of $\hat{P}$ as well, that is
where $\beta$ is the eigenvalue of $\hat{P}$. We have introduced the suffix $ij$ in order to indicate that we permute particles $i$ and $j$. The Pauli principle tells us that the total wave function for a system of fermions has to be antisymmetric, resulting in the eigenvalue $\beta = -1$.
The Schrödinger equation reads
where the vector $x_i$ represents the coordinates (spatial and spin) of particle $i$, $\lambda$ stands for all the quantum numbers needed to classify a given $N$-particle state and $\Psi_{\lambda}$ is the pertaining eigenfunction. Throughout this course, $\Psi$ refers to the exact eigenfunction, unless otherwise stated.
We write the Hamilton operator, or Hamiltonian, in a generic way
where $\hat{T}$ represents the kinetic energy of the system
while the operator $\hat{V}$ for the potential energy is given by
Hereafter we use natural units, viz. $\hbar=c=e=1$, with $e$ the elementary charge and $c$ the speed of light. This means that momenta and masses have the dimension of energy.
If one does quantum chemistry, after having introduced the Born-Oppenheimer approximation, which effectively freezes out the motion of the nuclei, the Hamiltonian for $N=n_e$ electrons takes the following form
where we have defined
and
The first term of Eq. (3), $H_0$, is the sum of the $N$ one-body Hamiltonians $\hat{h}_0$. Each individual Hamiltonian $\hat{h}_0$ contains the kinetic energy operator of an electron and its potential energy due to the attraction of the nucleus. The second term, $H_I$, is the sum of the $n_e(n_e-1)/2$ two-body interactions between each pair of electrons. Note that the double sum carries a restriction $i < j$.
The potential energy term due to the attraction of the nucleus defines the one-body field $u_i=u_{\mathrm{ext}}(x_i)$ of Eq. (2).
We have moved this term into the $\hat{H}_0$ part of the Hamiltonian, instead of keeping it in $\hat{V}$ as in Eq. (2).
The reason is that we will hereafter treat $\hat{H}_0$ as our non-interacting Hamiltonian. For a many-body wavefunction $\Phi_{\lambda}$ defined by an
appropriate single-particle basis, we may solve exactly the non-interacting eigenvalue problem
with $w_{\lambda}$ being the non-interacting energy. This energy is defined by the sum over single-particle energies to be defined below. For atoms the single-particle energies could be the hydrogen-like single-particle energies corrected for the charge $Z$. For nuclei and quantum dots, these energies could be given by the harmonic oscillator in three and two dimensions, respectively.
We will assume that the interacting part of the Hamiltonian can be approximated by a two-body interaction. This means that our Hamiltonian is written as
with
The one-body part $u_{\mathrm{ext}}(x_i)$ is normally approximated by a harmonic oscillator potential or by the Coulomb interaction an electron feels from the nucleus. However, other potentials are fully possible, such as one derived from the self-consistent solution of the Hartree-Fock equations.
In our case we assume that we can approximate the exact eigenfunction with a Slater determinant
where $x_i$ stands for the coordinates and spin values of particle $i$ and $\alpha,\beta,\dots, \gamma$ are the quantum numbers needed to describe the single-particle states.
The single-particle functions $\psi_{\alpha}(x_i)$ are eigenfunctions of the one-body Hamiltonian $h_i$, that is
with eigenvalues
The energies $\varepsilon_{\alpha}$ are the so-called non-interacting single-particle energies, or unperturbed energies. The total energy is in this case the sum over all single-particle energies, if no two-body or more complicated many-body interactions are present.
Let us denote the ground state energy by $E_0$. According to the variational principle we have
where $\Phi$ is a trial function which we assume to be normalized
where we have used the shorthand $d\mathbf{\tau}=d\mathbf{r}_1d\mathbf{r}_2\dots d\mathbf{r}_N$.
Before we proceed with a more compact representation of a Slater determinant, we would like to repeat some linear algebra properties which will be useful for our derivations of the energy as function of a Slater determinant, Hartree-Fock theory and later the nuclear shell model.
The inverse of a matrix is defined by
A unitary matrix $\mathbf{A}$ is one whose inverse is its adjoint
A real unitary matrix is called orthogonal and its inverse is equal to its transpose. A Hermitian matrix is self-adjoint, that is
Matrix Properties Reminder.
Relations | Name | matrix elements |
---|---|---|
$A = A^{T}$ | symmetric | $a_{ij} = a_{ji}$ |
$A = \left (A^{T} \right )^{-1}$ | real orthogonal | $\sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij}$ |
$A = A^{ * }$ | real matrix | $a_{ij} = a_{ij}^{ * }$ |
$A = A^{\dagger}$ | hermitian | $a_{ij} = a_{ji}^{ * }$ |
$A = \left (A^{\dagger} \right )^{-1}$ | unitary | $\sum_k a_{ik} a_{jk}^{ * } = \sum_k a_{ki}^{ * } a_{kj} = \delta_{ij}$ |
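As a quick numerical sanity check of the relations in the table above, one can verify the unitary and Hermitian properties with NumPy; the matrices below are randomly generated and serve as illustration only.

```python
import numpy as np

# Build a random unitary matrix via the QR decomposition (illustration only).
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A, _ = np.linalg.qr(M)  # columns of A are orthonormal, so A is unitary

# Unitary: A A^dagger = I, i.e. sum_k a_ik a_jk^* = delta_ij
assert np.allclose(A @ A.conj().T, np.eye(3))

# A Hermitian matrix equals its own adjoint: H = H^dagger, a_ij = a_ji^*
H = M + M.conj().T
assert np.allclose(H, H.conj().T)
```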
Since we will deal with fermions (identical and indistinguishable particles) we will form an ansatz for a given state in terms of so-called Slater determinants determined by a chosen basis of single-particle functions.
For a given $n\times n$ matrix $\mathbf{A}$ we can write its determinant
in a more compact form as
where $\hat{P}_i$ is a permutation operator which permutes the column indices $1,2,3,\dots,n$ and the sum runs over all $n!$ permutations. The quantity $p_i$ represents the number of transpositions of column indices needed to bring a given permutation back to its initial ordering, in our case given by $a_{11}a_{22}\dots a_{nn}$.
A simple $2\times 2$ determinant illustrates this. We have
where in the last term we have interchanged the column indices $1$ and $2$. The natural ordering we have chosen is $a_{11}a_{22}$.
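The permutation-sum definition of the determinant can be sketched in a few lines of Python; the function name `det_by_permutations` is our own choice and the implementation is meant for illustration, not efficiency (`np.linalg.det` is far faster for large $n$).

```python
import numpy as np
from itertools import permutations

def det_by_permutations(A):
    """Sum over all n! permutations of column indices with sign (-1)^p."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # determine the parity of the permutation by counting its cycles
        sign = 1
        seen = [False] * n
        for i in range(n):
            if seen[i]:
                continue
            j, cycle = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                cycle += 1
            sign *= (-1) ** (cycle - 1)
        # product a_{1 p(1)} a_{2 p(2)} ... a_{n p(n)}
        prod = 1.0
        for i in range(n):
            prod *= A[i, perm[i]]
        total += sign * prod
    return total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# a11*a22 - a12*a21 = 1*4 - 2*3 = -2
print(det_by_permutations(A))  # -2.0
```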
With the above we can rewrite our Slater determinant in a more compact form. In the Hartree-Fock method the trial function is the Slater determinant of Eq. (7) which can be rewritten as
with $p$ standing for the number of permutations. We have introduced for later use the so-called Hartree-function, defined by the simple product of all possible single-particle functions
Furthermore, $\hat{A}$ satisfies
is readily reduced to
Orthogonality of the single-particle functions allows us to further simplify the integral, and we arrive at the following expression for the expectation values of the sum of one-body Hamiltonians
and rewrite Eq. (11) as
which reduces to
The first term is the so-called direct term. It is frequently also called the Hartree term, while the second is due to the Pauli principle and is called the exchange term or just the Fock term. The factor $1/2$ is introduced because we now run over all pairs twice.
The last equation allows us to introduce some further definitions.
The single-particle wave functions $\psi_{\mu}(x)$, defined by the quantum numbers $\mu$ and $x$
are defined as the overlap
and
or for a general matrix element
It has the symmetry property
With these notations we rewrite Eq. (14) as
and an interacting part
The unperturbed part of the Hamiltonian yields the single-particle energies
for two electrons with the same quantum numbers.
Repeat for six electrons (find the relevant harmonic oscillator quantum numbers)
Repeat for 12 and 20 electrons
Write now a program which sets up all relevant quantum numbers for the single-particle basis
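As a starting point for the programming exercise above, here is a possible sketch which enumerates the two-dimensional harmonic-oscillator quantum numbers $(n, m, m_s)$ shell by shell; the function name and layout are our own choices. The shell index is $R=2n+|m|$, each state has energy $(2n+|m|+1)$ in units of $\hbar\omega$, and filling whole shells reproduces the magic numbers $2, 6, 12, 20$ of the exercises.

```python
def ho2d_basis(num_shells):
    """Return a list of (n, m, spin) tuples for all shells R < num_shells,
    where spin is twice the spin projection, i.e. -1 or +1."""
    basis = []
    for R in range(num_shells):
        for n in range(R // 2 + 1):
            # all m with 2n + |m| = R belong to shell R
            for m in (x for x in range(-R, R + 1) if 2 * n + abs(x) == R):
                for spin in (-1, +1):
                    basis.append((n, m, spin))
    return basis

for shells in (1, 2, 3, 4):
    print(shells, len(ho2d_basis(shells)))  # cumulative sizes 2, 6, 12, 20
```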
The calculus of variations involves problems where the quantity to be minimized or maximized is an integral.
In the general case we have an integral of the type
where $E$ is the quantity we seek to minimize or maximize. The problem is that although $f$ is a function of the variables $\Phi$, $\partial \Phi/\partial x$ and $x$, the exact dependence of $\Phi$ on $x$ is not known. This means again that even though the integral has fixed limits $a$ and $b$, the path of integration is not known. In our case the unknown quantities are the single-particle wave functions and we wish to choose an integration path which makes the functional $E[\Phi]$ stationary. This means that we want to find minima, maxima or saddle points. In physics we normally search for minima. Our task is therefore to find the minimum of $E[\Phi]$ so that its variation $\delta E$ is zero subject to specific constraints. In our case the constraints appear as the integral which expresses the orthogonality of the single-particle wave functions. The constraints can be treated via the technique of Lagrangian multipliers.
Let us specialize to the expectation value of the energy for one particle in three-dimensions. This expectation value reads
with the constraint
and a Hamiltonian
We will, for the sake of notational convenience, skip the variables $x,y,z$ below, and write for example $V(x,y,z)=V$.
The integral involving the kinetic energy can be written as, with the function $\psi$ vanishing strongly for large values of $x,y,z$ (given here by the limits $a$ and $b$),
We will drop the limits $a$ and $b$ in the remaining discussion. Inserting this expression into the expectation value for the energy and taking the variational minimum we obtain
and multiplying with a Lagrangian multiplier $\lambda$ and taking the variational minimum we obtain the final variational equation
We introduce the function $f$
which results in
We can then identify the Lagrangian multiplier as the energy of the system. The last equation is nothing but the standard Schrödinger equation, and the variational approach discussed here provides a powerful method for obtaining approximate solutions for the wave function.
In deriving the Hartree-Fock equations, we will expand the single-particle functions in a known basis and vary the coefficients, that is, the new single-particle wave function is written as a linear expansion in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions etc). We define our new Hartree-Fock single-particle basis by performing a unitary transformation on our previous basis (labelled with greek indices) as
In this case we vary the coefficients $C_{p\lambda}$. If the basis contains infinitely many functions, we need to truncate the above sum. We assume that the basis $\phi_{\lambda}$ is orthogonal. A unitary transformation preserves the orthogonality, as discussed in exercise 1 below.
To see that a unitary transformation preserves orthogonality, consider first a basis of vectors $\mathbf{v}_i$,
We assume that the basis is orthogonal, that is
An orthogonal or unitary transformation
preserves the dot product and orthogonality since
orthogonality is preserved, that is $\langle \alpha \vert \beta\rangle = \delta_{\alpha\beta}$ and $\langle p \vert q\rangle = \delta_{pq}$.
This property is extremely useful when we build up a basis of many-body Slater-determinant-based states.
Note also that although a basis $\vert \alpha\rangle$ contains an infinity of states, for practical calculations we have always to make some truncations.
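A small numerical illustration of this point (our own example, not part of the text): rotating an orthonormal basis with a real orthogonal matrix yields a new orthonormal basis.

```python
import numpy as np

# Build a random real orthogonal matrix C via the QR decomposition.
rng = np.random.default_rng(42)
C, _ = np.linalg.qr(rng.standard_normal((4, 4)))

old_basis = np.eye(4)      # row lambda is the old basis vector |lambda>
new_basis = C @ old_basis  # row p is |p> = sum_lambda C_{p lambda} |lambda>

overlaps = new_basis @ new_basis.T  # matrix of dot products <p|q>
assert np.allclose(overlaps, np.eye(4))  # orthonormality is preserved
```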
Before we develop the Hartree-Fock equations, there is another very useful property of determinants that we will use both in connection with Hartree-Fock calculations and later shell-model calculations.
Consider the following determinant
where $det(\mathbf{C})$ and $det(\mathbf{B})$ are the determinants of $n\times n$ matrices
with elements $c_{ij}$ and $b_{ij}$ respectively.
This is a property we will use in our Hartree-Fock discussions. Convince yourself about the correctness of the above expression by setting $n=2$.
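For a quick numerical version of this check (here with random $2\times 2$ matrices rather than the basis functions themselves):

```python
import numpy as np

# Verify det(CB) = det(C) det(B) for n = 2, as suggested in the text.
rng = np.random.default_rng(1)
C = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
assert np.isclose(np.linalg.det(C @ B),
                  np.linalg.det(C) * np.linalg.det(B))
```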
With our definition of the new basis in terms of an orthogonal basis we have
If the coefficients $C_{p\lambda}$ belong to an orthogonal or unitary matrix, the new basis is also orthogonal. Our Slater determinant in the new basis $\psi_p(x)$ is written as
which is nothing but $det(\mathbf{C})det(\Phi)$, with $det(\Phi)$ being the determinant given by the basis functions $\phi_{\lambda}(x)$.
It is normal to choose a single-particle basis defined as the eigenfunctions of parts of the full Hamiltonian. The typical situation consists of the solutions of the one-body part of the Hamiltonian, that is we have
The single-particle wave functions $\phi_{\lambda}({\bf r})$, defined by the quantum numbers $\lambda$ and ${\bf r}$ are defined as the overlap
In our discussions hereafter we will use our definitions of single-particle states above and below the Fermi ($F$) level given by the labels $ijkl\dots \le F$ for so-called single-hole states and $abcd\dots > F$ for so-called particle states. For general single-particle states we employ the labels $pqrs\dots$.
In Eq. (16), restated here
we found the expression for the energy functional in terms of the basis function $\phi_{\lambda}({\bf r})$. We then varied the above energy functional with respect to the basis functions $|\mu \rangle$. Now we are interested in defining a new basis defined in terms of a chosen basis as defined in Eq. (17). We can then rewrite the energy functional as
We wish now to minimize the above functional. We introduce again a set of Lagrange multipliers, noting that since $\langle i | j \rangle = \delta_{i,j}$ and $\langle \alpha | \beta \rangle = \delta_{\alpha,\beta}$, the coefficients $C_{i\gamma}$ obey the relation
which allows us to define a functional to be minimized that reads
which yields for every single-particle state $i$ and index $\alpha$ (recalling that the coefficients $C_{i\alpha}$ are matrix elements of a unitary (or orthogonal for a real symmetric matrix) matrix) the following Hartree-Fock equations
we can rewrite the new equations as
The latter is nothing but a standard eigenvalue problem.
It suffices to tabulate the matrix elements $\langle \alpha | h | \beta \rangle$ and $\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}$ once and for all. Successive iterations require thus only a look-up in tables over one-body and two-body matrix elements. These details will be discussed below when we solve the Hartree-Fock equations numerically.
Our Hartree-Fock matrix is thus
The Hartree-Fock equations are solved in an iterative way, starting with a guess for the coefficients $C_{j\gamma}=\delta_{j,\gamma}$ and solving the equations by diagonalization until the new single-particle energies $\epsilon_i^{\mathrm{HF}}$ no longer change by more than a prefixed quantity.
Normally we assume that the single-particle basis $|\beta\rangle$ forms an eigenbasis for the operator $\hat{h}_0$, meaning that the Hartree-Fock matrix becomes
The Hartree-Fock eigenvalue problem
can be written out in a more compact form as
The Hartree-Fock equations are, in their simplest form, solved in an iterative way, starting with a guess for the coefficients $C_{i\alpha}$. We label the coefficients as $C_{i\alpha}^{(n)}$, where the subscript $n$ stands for iteration $n$. To set up the algorithm we can proceed as follows:
We start with a guess $C_{i\alpha}^{(0)}=\delta_{i,\alpha}$. Alternatively, we could have used random starting values as long as the vectors are normalized. Another possibility is to give states below the Fermi level a larger weight.
The Hartree-Fock matrix simplifies then to (assuming that the coefficients $C_{i\alpha} $ are real)
The diagonalization with the new Hartree-Fock potential yields new eigenvectors and eigenvalues. This process is continued till for example
where $\lambda$ is a user prefixed quantity ($\lambda \sim 10^{-8}$ or smaller) and $p$ runs over all calculated single-particle energies and $m$ is the number of single-particle states.
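The steps above can be collected in a schematic Python sketch. We assume here that the one-body matrix `h0` and the antisymmetrized two-body elements `v[alpha, gamma, beta, delta]` $=\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}$ have been tabulated beforehand; the function name and array layout are illustrative choices, not a prescribed implementation.

```python
import numpy as np

def hartree_fock(h0, v, n_occ, tol=1e-8, max_iter=100):
    """Schematic HF self-consistency loop for real coefficient matrices."""
    n = h0.shape[0]
    C = np.eye(n)            # starting guess C^(0)_{i alpha} = delta_{i alpha}
    eps_old = np.zeros(n)
    for iteration in range(max_iter):
        # density matrix rho_{gamma delta} = sum_{j <= F} C_{j gamma} C_{j delta}
        rho = C[:n_occ].T @ C[:n_occ]
        # HF matrix: h^HF_{ab} = <a|h0|b> + sum_{gd} rho_{gd} <ag|v|bd>_AS
        h_hf = h0 + np.einsum('gd,agbd->ab', rho, v)
        eps, Ccols = np.linalg.eigh(h_hf)
        C = Ccols.T          # row i of C holds the new eigenvector i
        # convergence test on the single-particle energies
        if np.sum(np.abs(eps - eps_old)) / n < tol:
            break
        eps_old = eps
    return eps, C
```

With `v` set to zero, the routine simply returns the eigenvalues of `h0`, which is a useful first test of an implementation.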
We can rewrite the ground state energy by adding and subtracting $\hat{u}^{\mathrm{HF}}(x_i)$
which results in
Our single-particle states $ijk\dots$ are now single-particle states obtained from the solution of the Hartree-Fock equations.
Using our definition of the Hartree-Fock single-particle energies we obtain then the following expression for the total ground-state energy
where $\Phi^{\mathrm{HF}}(N)$ is the new Slater determinant defined by the new basis of Eq. (17) for $N$ electrons (same $Z$). If we assume that the single-particle wave functions in the new basis do not change when we remove one electron or add one electron, we can then define the corresponding energy for the $N-1$ systems as
we obtain
which is just our definition of the Hartree-Fock single-particle energy
These two equations can thus be used to compute the electron affinity and the ionization energy, respectively. Koopmans' theorem states that, for example, the ionization energy of a closed-shell system is given by the energy of the highest occupied single-particle state. If we assume that changing the number of electrons from $N$ to $N+1$ does not change the Hartree-Fock single-particle energies and eigenfunctions, then Koopmans' theorem simply states that the ionization energy of an atom is given by the single-particle energy of the last bound state. In a similar way, we can also define the electron affinities.
As an example, consider a simple model for atomic sodium, Na. Neutral sodium has eleven electrons, with the least bound one occupying the $3s$ single-particle state. The energy needed to remove an electron from neutral sodium is rather small, 5.1391 eV, a feature which pertains to all alkali metals. Having performed a Hartree-Fock calculation for neutral sodium, we can then compute the ionization energy by using the single-particle energy for the $3s$ state, namely $\epsilon_{3s}^{\mathrm{HF}}$.
From these considerations, we see that Hartree-Fock theory allows us to make a connection between experimental
observables (here ionization and affinity energies) and the underlying interactions between particles.
In this sense, we are now linking the dynamics and structure of a many-body system with the laws of motion which govern the system. Our approach is a reductionistic one, meaning that we want to understand the laws of motion
in terms of the particles or degrees of freedom which we believe are the fundamental ones. Our Slater determinant, being constructed as the product of various single-particle functions, follows this philosophy.
With similar arguments as in atomic physics, we can now use Hartree-Fock theory to make a link between nuclear forces and separation energies. Changing to nuclear systems, we define
where $\Phi^{\mathrm{HF}}(A)$ is the new Slater determinant defined by the new basis of Eq. (17) for $A$ nucleons, where $A=N+Z$, with $N$ now being the number of neutrons and $Z$ the number of protons. If we assume again that the single-particle wave functions in the new basis do not change from a nucleus with $A$ nucleons to a nucleus with $A-1$ nucleons, we can then define the corresponding energy for the $A-1$ systems as
which becomes
which is just our definition of the Hartree-Fock single-particle energy
If we then recall that the binding energy differences
define the separation energies, we see that the Hartree-Fock single-particle energies can be used to define separation energies. We have thus our first link between nuclear forces (included in the potential energy term) and an observable quantity defined by differences in binding energies.
We have thus the following interpretations (if the single-particle fields do not change)
and
If we use ${}^{16}\mbox{O}$ as our closed-shell nucleus, we could then interpret the separation energy
and
and
We can continue like this for all $A\pm 1$ nuclei where $A$ is a good closed-shell (or subshell closure) nucleus. Examples are ${}^{22}\mbox{O}$, ${}^{24}\mbox{O}$, ${}^{40}\mbox{Ca}$, ${}^{48}\mbox{Ca}$, ${}^{52}\mbox{Ca}$, ${}^{54}\mbox{Ca}$, ${}^{56}\mbox{Ni}$, ${}^{68}\mbox{Ni}$, ${}^{78}\mbox{Ni}$, ${}^{90}\mbox{Zr}$, ${}^{88}\mbox{Sr}$, ${}^{100}\mbox{Sn}$, ${}^{132}\mbox{Sn}$ and ${}^{208}\mbox{Pb}$, to mention some possible cases.
We can thus make our first interpretation of the separation energies in terms of the simplest possible many-body theory. If we also recall that the so-called energy gap for neutrons (or protons) is defined as
for neutrons and the corresponding gap for protons
we can define the neutron and proton energy gaps for ${}^{16}\mbox{O}$ as
and
brings us into the new basis.
The new basis has quantum numbers $a=1,2,\dots,A$.
a) Show that the new basis is orthogonal.
b) Show that the new Slater determinant constructed from the new single-particle wave functions can be written as the determinant based on the previous basis and the determinant of the matrix $C$.
c) Show that the old and the new Slater determinants are equal up to a complex constant with absolute value unity.
Hint: $C$ is a unitary matrix.
We will assume that we can build various Slater determinants using an orthogonal single-particle basis $\psi_{\lambda}$, with $\lambda = 1,2,\dots,A$.
The aim of this exercise is to set up specific matrix elements that will turn out to be useful when we start our discussions of the nuclear shell model. In particular you will notice, depending on the character of the operator, that many matrix elements will actually be zero.
Consider three $A$-particle Slater determinants $|\Phi_0\rangle$, $|\Phi_i^a\rangle$ and $|\Phi_{ij}^{ab}\rangle$, where the notation means that Slater determinant $|\Phi_i^a\rangle$ differs from $|\Phi_0\rangle$ by one single-particle state, that is a single-particle state $\psi_i$ is replaced by a single-particle state $\psi_a$. It will later be interpreted as a so-called one-particle-one-hole excitation. Similarly, the Slater determinant $|\Phi_{ij}^{ab}\rangle$ differs by two single-particle states from $|\Phi_0\rangle$ and is normally thought of as a two-particle-two-hole excitation.
Define a general one-body operator $\hat{F} = \sum_{i}^A\hat{f}(x_{i})$ and a general two-body operator $\hat{G}=\sum_{i>j}^A\hat{g}(x_{i},x_{j})$ with $g$ being invariant under the interchange of the coordinates of particles $i$ and $j$. You can use here the results from the second exercise set, exercise 3.
a) Calculate
and
b) Find thereafter
and
c) Finally, find
and
What happens with the two-body operator if we have a transition probability of the type
where the Slater determinant to the right of the operator differs by more than two single-particle states?
d) With an orthogonal basis of Slater determinants $\Phi_{\lambda}$, we can now construct an exact many-body state as a linear expansion of Slater determinants, that is, a given exact state
In all practical calculations the infinity is replaced by a given truncation in the sum.
If you are to compute the expectation value of (at most) a two-body Hamiltonian for the above exact state
based on the calculations above, which are the only elements which will contribute? (there is no need to perform any calculation here, use your results from exercises a), b), and c)).
These results simplify to a large extent shell-model calculations.
The Hamiltonian for a system of $N$ electrons confined in a harmonic potential reads
where $\hat{V}_{ij}$ is the two-body potential, whose matrix elements are calculated on the fly in your program. See the expression below.
The Hartree-Fock algorithm can be broken down as follows. We recall that our Hartree-Fock matrix is
Normally we assume that the single-particle basis $\vert\beta\rangle$ forms an eigenbasis for the operator $\hat{h}_0$ (this is our case), meaning that the Hartree-Fock matrix becomes
The Hartree-Fock eigenvalue problem
can be written out in a more compact form as
The equations are often rewritten in terms of a so-called density matrix, which is defined as
It means that we can rewrite the Hartree-Fock Hamiltonian as
It is convenient to use the density matrix since we can precalculate, in every iteration, the products of two eigenvector components $C$.
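A minimal sketch of this point (with randomly generated matrix elements, for illustration only): building the density matrix once per iteration reproduces the naive sum over occupied eigenvector components, while replacing a triple contraction by a single one in every matrix element.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_occ = 6, 3
# rows of C play the role of the eigenvectors from the previous iteration
C = np.linalg.qr(rng.standard_normal((n, n)))[0].T
v = rng.standard_normal((n, n, n, n))  # stand-in for <ag|v|bd>_AS

# density matrix rho_{gamma delta} = sum_{j <= F} C_{j gamma} C_{j delta}
rho = C[:n_occ].T @ C[:n_occ]

# potential term built from the precomputed density matrix ...
u_rho = np.einsum('gd,agbd->ab', rho, v)
# ... equals the naive contraction over occupied eigenvector components
u_naive = np.einsum('jg,jd,agbd->ab', C[:n_occ], C[:n_occ], v)
assert np.allclose(u_rho, u_naive)
```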