What process turns liquid water into a solid? As the temperature is lowered, a material crosses from a regime where maximizing entropy dominates into one where minimizing energy dominates. In transitions like liquid-solid, that energy comes from atom-atom forces. This problem falls into the category of first order phase transitions, which are difficult to study.
Instead of looking at the liquid-solid problem to understand phase transitions, we will look at a magnetic material undergoing a second order phase transition. In our approximation of the problem, the atoms sit at fixed positions on a lattice and interact with their neighbors according to the Hamiltonian, \begin{equation} {\cal H} = -J \sum_{<i,j>} S^z_i S_j^z , \end{equation} which is simply the energy of a configuration. In this Ising Model, only nearest neighbors $<i,j>$ interact, and each spin variable points solely along the $\pm z$ direction, $S^z_i = \pm 1$. For example, two neighboring aligned spins contribute $-J$ to the energy while anti-aligned spins contribute $+J$, so for $J>0$ alignment is favored.
At a given temperature $T$ (inverse temperature $\beta=1/T$, with $k_B=1$), the probability of finding the system in a given configuration $c_i$ follows the Boltzmann distribution, \begin{equation} P(c_i)=\frac{\mathrm{e}^{-\beta E(c_i)}}{\sum\limits_j \mathrm{e}^{-\beta E(c_j)}}. \end{equation} Our job is to compute observables under this probability distribution.
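To make the distribution concrete, here is a brute-force sketch of my own (a toy, not part of the simulation below) that enumerates every configuration of a 4-site Ising chain with periodic boundaries and computes each configuration's Boltzmann probability. For anything beyond toy sizes this enumeration is hopeless, which is exactly why we turn to Monte Carlo sampling.

# Brute-force Boltzmann probabilities for a 4-site Ising chain with
# periodic boundaries (toy parameters J = 1, β = 0.5). Each spin is ±1.
let J = 1.0, β = 0.5, n = 4
    # energy of one configuration s (a vector of ±1 values)
    energy(s) = -J * sum(s[i] * s[mod1(i + 1, length(s))] for i in eachindex(s))

    # enumerate all 2^n configurations via the bits of 0:2^n-1
    configs = [[(b >> k) & 1 == 1 ? 1 : -1 for k in 0:n-1] for b in 0:2^n-1]
    weights = exp.(-β .* energy.(configs))
    P = weights ./ sum(weights)        # normalized probabilities

    # the two fully aligned configurations carry the most weight
    println("max P = ", maximum(P), " at ", configs[argmax(P)])
end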
I will say a little bit about the tunable parameters here, but Monte Carlo simulations are as much an art as a science, so I leave it up to you to play with the numbers and build intuition for what works.
While temperature has a direct physical meaning in a system like a magnet, we can set up other situations, such as optimization or approximating a probability distribution, where temperature instead describes how stuck we are in a particular minimum. That intuition also plays a role in interpreting the results of our simulation.
Temperature also brings other complications when we study critical phenomena, i.e. phase transitions. Near the critical temperature, computations get much more difficult; I'll elaborate in a later post.
Usually normalized out anyway...
Each Monte Carlo time step is one complete sweep of the lattice ($N$ random flip attempts). The more time steps, the more accurate the results, but the longer you have to wait.
Though a time step does include $N$ spin-flip attempts, that doesn't mean we have actually moved to a genuinely new configuration. If we want to sample our configuration space randomly, we need to wait a bit longer between measurements. Too short a wait, and our measurements are correlated and not accurate; too long, and we waste computation. In a later post, I will look at the autocorrelation time, which is a good way to figure out whether your measurement interval is appropriate.
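As a rough preview, here is a minimal sketch of how you might estimate the normalized autocorrelation of a measurement series (the vector m stands in for something like the magnetization samples collected below; StatsBase.jl also ships a tested autocor if you prefer):

# Rough normalized autocorrelation estimate for a series of measurements m
# at a given lag (in units of measurements). Values near 1 mean the samples
# are still strongly correlated; values near 0 mean they are decorrelated.
function autocor_estimate(m::AbstractVector, lag::Int)
    n = length(m)
    mbar = sum(m) / n
    c0 = sum((m .- mbar) .^ 2) / n
    ck = sum((m[1:n-lag] .- mbar) .* (m[1+lag:n] .- mbar)) / (n - lag)
    return ck / c0
end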
Some less important things: periodic versus open boundary conditions, shape and symmetries of simulation cell.
In the post on the Markov Chain, each state was a location on our grid. Now our state is one configuration of all $N$ of our spins. That means for $N$ Ising spins we have $2^N$ different configurations. Yeah... We aren't going to be exploring all of those.
So how do we get from one state to the next?
First, we choose a spin that we could flip. This potential spin is chosen randomly for convenience's sake.
Then we use the Metropolis-Hastings Algorithm to decide whether or not to flip the spin and generate a new configuration.
We split the update into two cases based on $\alpha = \frac{\pi_i}{\pi_j}= \frac{P(c_i)}{P(c_j)}$, the ratio of the probabilities of the proposed configuration $c_i$ and the current configuration $c_j$: if $\alpha \geq 1$, we always accept the flip; if $\alpha < 1$, we accept it with probability $\alpha$.
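In code, that accept/reject rule is tiny; a minimal sketch, using the $\alpha$ defined above, looks like this:

# Metropolis-Hastings acceptance: always accept when α ≥ 1,
# otherwise accept with probability α.
accept_move(α) = α >= 1 || rand() < α

The short-circuit || also saves drawing a random number whenever the move is always accepted.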
Does this obey detailed balance?
From the Monte Carlo Markov Chain post, we take our equation for detailed balance
\begin{equation} \frac{p_{j \to i}}{ p_{i \to j}} = \frac{\pi_i}{\pi_j} . \end{equation} Since the spin to flip is chosen uniformly at random, the ratio of our transition probabilities is $\min(1,\alpha)/\min(1,1/\alpha) = \alpha = \pi_i/\pi_j$, so the condition holds.

How about ergodicity? We can reach any configuration from any other by flipping spins one at a time.
For the Ising Ferromagnet, our $\alpha$ is \begin{equation} \alpha= \frac{Z}{Z} \frac{\mathrm{e}^{-\beta E_i}}{\mathrm{e}^{-\beta E_j}} = \mathrm{e}^{\beta \left(E_j - E_i \right)} , \end{equation} which is simply a function of the difference in energy between the two states. Therefore we don't need to know the absolute energy at each point in time, just how the spin flip changes its local environment.
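Concretely, if configuration $c_j$ is the current state and $c_i$ differs only by flipping spin $k$, only the bonds touching $k$ change, so \begin{equation} E_i - E_j = 2 J S^z_k \sum_{l \in \mathrm{nn}(k)} S^z_l , \qquad \alpha = \mathrm{e}^{\beta \left(E_j - E_i \right)} = \mathrm{e}^{-2 \beta J S^z_k \sum_{l \in \mathrm{nn}(k)} S^z_l} , \end{equation} where $S^z_k$ is the spin's value before the flip and the sum runs over its nearest neighbors. The dE function below computes this local sum $-J S^z_k \sum_l S^z_l$, from which the energy change is just a factor of $-2$.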
In [1]:
using Statistics;
using Plots;
plotly(); # or gr() or pyplot()
push!(LOAD_PATH,"../Packages")
using Lattices
Instead of going into calculating all the lattice parameters again, we will use a module I define in the file Lattices.jl. It contains
Lattice Types
Once a lattice is created, it contains the following members:
- name, a string
- l, length in number of unit cells
- dim, dimension of the lattice
- a, array containing the basis vectors by which positions are generated
- unit, array of positions inside a single unit cell
- N, number of total sites
- X, array of positions
- nnei, number of nearest neighbors
- neigh, array of nearest neighbors [i][j], where i is the site and j runs over 1:nnei

Today, I will just look at the square lattice, since it displays the standard phase transition behavior. Some of the lattices I have shown (kagome, triangular, ...) are special frustrated lattices, and would therefore behave very weirdly in this situation.
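As a quick illustration of how I'll use those fields below (assuming the interface just described), the neighbors of site i on the square lattice are reached through the neigh table:

# Hypothetical peek at the neighbor table (field names as listed above):
lt_demo = MakeLattice("Square", 4)     # a small 4×4 square lattice
lt_demo.N, lt_demo.nnei                # 16 sites, 4 nearest neighbors each
lt_demo.neigh[1, 1:lt_demo.nnei]       # the indices of site 1's neighbors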
In [2]:
## Define l here
l=50;
lt=MakeLattice("Square",l);
S=ones(Int8,l,l); #Our spins
dt=1/(lt.N);
In [3]:
# The energy contribution of just one site
function dE(i::Int)
    Eii=0;
    for j in 1:lt.nnei
        Eii+=S[lt.neigh[i,j]]; # sum the spins of the nearest neighbors
    end
    Eii*=-J*S[i]; # local energy: -J * S_i * (sum of neighboring spins)
    return Eii;
end
# The energy of the entire lattice
function E()
    Evar=0;
    for k in 1:lt.N
        Evar+=.5*dE(k); # factor of 1/2 so each bond is only counted once
    end
    return Evar;
end
# The magnetization of the entire lattice
function M()
    Mvar=0;
    for k in 1:lt.N
        Mvar+=S[k];
    end
    return Mvar;
end
"defined functions"
Out[3]:
I have set up the simulation so that you can do two different things. For one, you can set video=true and t to a small value; then, in a new window, you see what the configuration looks like each time you measure.
Or you can set video=false and t to a large value, and actually measure the statistics of the system over a bunch of configurations.
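The loop shown below only includes the measurement branch. A hypothetical helper for the video=true mode (my own sketch, not part of the original code) could look something like this, called from inside the measurement block:

# Hypothetical helper for the video=true mode: redraw the current
# configuration so you can watch domains form as the simulation runs.
function show_frame(S, ti)
    display(heatmap(S, title="Configuration at sweep $ti"))
    sleep(0.05)   # brief pause so successive frames are visible
end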
In [4]:
beta=.2;
J=1;
t=10000;
nskip=10; # don't measure every sweep => better decorrelation
"Parameters set"
Out[4]:
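Before running any sweeps, a quick sanity check I like (not in the original notebook): with every spin still up from the initialization above, each of the $N$ sites on the square lattice has 4 aligned neighbors, so the energy should be exactly $-2JN$ and the magnetization exactly $N$.

# Sanity check on the fresh all-up configuration: E() should be -2*J*N
# (4 bonds per site, halved to avoid double counting) and M() should be N.
E() == -2*J*lt.N, M() == lt.N   # expected: (true, true)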
In [5]:
nmeas=floor(Int,t/nskip); # how many times we will measure
Ma=Array{Int32}(undef,nmeas); # our magnetization measurements
Ea=Array{Int32}(undef,nmeas); # our energy measurements
"done"
Out[5]:
In [6]:
tm=1; #Our measurement time step
for ti in 1:t
    for j in 1:lt.N
        ii = rand(1:lt.N); # choose a random site
        de=dE(ii); # local energy before the flip; flipping changes the energy by -2*de
        if(de>0 || rand()<exp(2*beta*de) ) # Metropolis: accept if the energy drops, otherwise with probability exp(2*beta*de)
            S[ii]=-S[ii]; # flip the spin
        end
    end
    if mod(ti,nskip)==0 # only measure every nskip sweeps
        Ma[tm]=M();
        Ea[tm]=E();
        tm+=1;
    end
end
Mave=mean(Ma/lt.N);
Mstd=std(Ma/lt.N);
Eave=mean(Ea/lt.N);
Estd=std(Ea/lt.N);
Mave, Mstd
Out[6]:
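Mstd and Estd measure the spread of the individual samples. If what you want instead is an error bar on the averages, a naive estimate (assuming the nskip-separated measurements are independent, which is only approximately true, as discussed above) is:

# Naive standard error of the mean; this underestimates the true error
# whenever residual autocorrelation remains between measurements.
Merr = Mstd/sqrt(nmeas)
Eerr = Estd/sqrt(nmeas)
Mave, Merr, Eave, Eerr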
In [7]:
heatmap(S)
Out[7]:
In [8]:
plot(1:nskip:5000,Ma[1:500]/lt.N)
hline!([Mave])
plot!(xlabel="Monte Carlo Step"
,ylabel="Magetization"
,title="Magnetization versus Time, β=$beta",legend=false)
Out[8]:
In [9]:
histogram(Ma/lt.N,normed=true,label="Samples Normed")
x=collect(Mave-Mstd*3:.001:Mave+Mstd*3)
gaussian=1/(Mstd*sqrt(2*pi))*exp.(-(x.-Mave).^2/(2*Mstd^2));
plot!(x,gaussian,label="Gaussian Fit")
plot!(xlabel="Magnetization",
ylabel="Normed Counts",
title="Magnetization Sample Distribution, β=$beta")
Out[9]:
In [10]:
plot(1:nskip:5000,Ea[1:500]/lt.N)
hline!([Eave])
plot!(xlabel="Monte Carlo Steps",
ylabel="Energy",
title="Energy versus Time, β=$beta",legend=false)
Out[10]:
In [11]:
histogram(Ea/lt.N,normed=true,label="Samples Normed")
x=collect(Eave-3*Estd:.001:Eave+3*Estd)
gaussian=1/(Estd*sqrt(2*pi))*exp.(-(x.-Eave).^2/(2*Estd^2));
plot!(x,gaussian,label="Gaussian Fit")
plot!(xlabel="Energy",
ylabel="Normed Counts",
title="Energy Historam, β=$beta")
Out[11]:
So here are some example results I got.
Note: These pictures came from an old version using Julia 0.4.8 and PyPlot.
Paramagnet Gif
Paramagnet Magnetization Samples. A properly tuned simulation should show white noise, without correlations or getting stuck in particular regions.
Histogram for Magnetization, $\beta=0.3$
Histogram for Energy $\beta=0.3$
Magnet at $\beta=0.7$ Gif
Histogram for Magnetization
Histogram for Energy
This is an extremely rich problem. Since this post seems long enough to me already, I'll leave these as exercises for you for now, or you can wait until I hold your hand through them.