An ensemble of trajectories obtained from simulating Langevin dynamics will tend towards a stationary distribution: the Boltzmann distribution.
Consider two identical magnetic nanoparticles arranged along their (parallel) anisotropy axes. The system has six degrees of freedom (the x, y, z components of each particle's magnetisation), but the energy is defined by the two angles $\theta_1,\theta_2$ alone.
The Boltzmann distribution gives the probability that the system is found in a given configuration (here described by the magnetisation angles). The distribution is parameterised by the temperature of the system and the energy landscape of the problem.
Note that the sine terms appear because the distribution is defined over solid angles; in other words, the distribution is over the surface of a unit sphere for each moment. The sine terms are the Jacobian factors that arise when the solid angle is expressed in terms of the polar angle between the magnetisation and the anisotropy axis ($\theta$).
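Explicitly, the solid-angle element of each moment in spherical coordinates is
$$\mathrm{d}\Omega_i = \sin\theta_i\,\mathrm{d}\theta_i\,\mathrm{d}\phi_i$$
Since the energy written below depends only on the polar angles, integrating over the azimuthal angles $\phi_1,\phi_2$ contributes a constant factor (absorbed into $Z$) and leaves the $\sin\theta_1\sin\theta_2$ factors in the density.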
$$p\left(\theta_1,\theta_2\right) = \frac{\sin(\theta_1)\sin(\theta_2)\,e^{-E\left(\theta_1,\theta_2\right)/\left(k_BT\right)}}{Z}$$
The energy of the system combines the Stoner-Wohlfarth (uniaxial anisotropy) energy of each single-domain particle with the dipole-dipole interaction energy between them:
$$\frac{E\left(\theta_1,\theta_2\right)}{k_BT}=-\sigma\left(\cos^2\theta_1+\cos^2\theta_2\right) -\nu\left(3\cos\theta_1\cos\theta_2 - \cos\left(\theta_1-\theta_2\right)\right)$$
$$\sigma=\frac{KV}{k_BT}\qquad\nu=\frac{\mu_0V^2M_s^2}{2\pi R^3k_BT}$$
where $\sigma$ and $\nu$ are the normalised anisotropy and interaction strengths respectively.
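As a point of reference for the simulations below, in the non-interacting limit ($\nu=0$) each particle sees the familiar uniaxial double well
$$\frac{E(\theta)}{k_BT}=-\sigma\cos^2\theta$$
with minima at $\theta=0,\pi$, a saddle at $\theta=\pi/2$, and an energy barrier of height $\sigma$ (in units of $k_BT$) between them.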
In [1]:
import numpy as np

# dipole-dipole interaction energy (normalised by kB*T)
def dd(t1, t2, nu):
    return -nu*(3*np.cos(t1)*np.cos(t2) - np.cos(t1-t2))

# uniaxial anisotropy energy (normalised by kB*T)
def anis(t1, t2, sigma):
    return -sigma*(np.cos(t1)**2 + np.cos(t2)**2)

# total energy
def tot(t1, t2, nu, sigma):
    return dd(t1, t2, nu) + anis(t1, t2, sigma)
In [2]:
# numerator of the Boltzmann distribution (i.e. ignoring the partition function Z)
def p_unorm(t1, t2, nu, sigma):
    return np.sin(t1)*np.sin(t2)*np.exp(-tot(t1, t2, nu, sigma))
In [3]:
# The Boltzmann distributions for 2 particles
from scipy.integrate import dblquad, quad
def boltz_2d(ts, nu, sigma):
    # unnormalised density evaluated on the grid ts x ts
    e = np.array([[p_unorm(t1, t2, nu, sigma) for t1 in ts] for t2 in ts])
    # partition function Z from numerical integration over [0, ts[-1]]^2
    Z = dblquad(lambda t1, t2: p_unorm(t1, t2, nu, sigma),
                0, ts[-1], lambda x: 0, lambda x: ts[-1])[0]
    return e/Z
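As a quick sanity check of boltz_2d (a sketch only; the grid and the values of $\nu,\sigma$ here are arbitrary illustrative choices), the normalised density should integrate to approximately one over the grid it was evaluated on:
# sanity check: the normalised joint density should integrate to ~1 over the grid
ts_check = np.linspace(0.01, np.pi - 0.01, 200)
b_check = boltz_2d(ts_check, nu=0.0, sigma=5.0)   # arbitrary illustrative parameters
dt = ts_check[1] - ts_check[0]
total = b_check.sum()*dt*dt                       # simple Riemann-sum double integral
print('Integral over the grid: {:.3f}'.format(total))   # expect a value close to 1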
In [4]:
import magpy as mp
import numpy as np
In [5]:
K = 1e5           # anisotropy constant (J/m^3)
r = 7e-9          # particle radius (m)
T = 330           # temperature (K)
Ms = 400e3        # saturation magnetisation (A/m)
R = 9e-9          # centre-to-centre separation (m)
kdir = [0, 0, 1]  # anisotropy axis direction
location1 = np.array([0, 0, 0], dtype=float)  # first particle at the origin
location2 = np.array([0, 0, R], dtype=float)  # second particle a distance R along z
direction = np.array([0, 0, 1], dtype=float)  # initial magnetisation direction
alpha = 1.0                                   # damping constant
In [6]:
model = mp.Model(
    anisotropy=np.array([K, K], dtype=float),
    anisotropy_axis=np.array([kdir, kdir], dtype=float),
    damping=alpha,
    location=np.array([location1, location2], dtype=float),
    magnetisation=Ms,
    magnetisation_direction=np.array([direction, direction], dtype=float),
    radius=np.array([r, r], dtype=float),
    temperature=T
)
Simulate an ensemble of 10,000 dimers without interactions.
In [15]:
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-12,
                              max_samples=500, seeds=range(10000),
                              n_jobs=8, implicit_solve=True,
                              interactions=False)
m_z0 = np.array([state['z'][0] for state in res.final_state()])/Ms
m_z1 = np.array([state['z'][1] for state in res.final_state()])/Ms
theta0 = np.arccos(m_z0)
theta1 = np.arccos(m_z1)
The ensemble magnetisation shows that the system has relaxed into its local energy minima. (We could equilibrate the system globally, but that would take far longer to simulate because the anisotropy energy barrier must be overcome; see the estimate below.)
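To put a number on "far longer", a rough Néel-Arrhenius estimate of the escape time over the anisotropy barrier is sketched below; the attempt time $\tau_0$ is an assumed order-of-magnitude value, not something taken from magpy:
# rough Neel-Arrhenius estimate of the time to hop over the anisotropy barrier
V = 4./3*np.pi*r**3
sigma = K*V/mp.get_KB()/T
tau0 = 1e-9                      # assumed attempt time (order of magnitude only)
tau_escape = tau0*np.exp(sigma)
print('Barrier height: {:.1f} kT'.format(sigma))
print('Estimated escape time ~{:.1e} s (the simulation covers only 1e-9 s)'.format(tau_escape))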
In [16]:
import matplotlib.pyplot as plt
%matplotlib inline
In [17]:
plt.plot(res.results[0].time, res.ensemble_magnetisation())
plt.title('Non-interacting dimer ensemble magnetisation');
In [18]:
# Dimensionless parameters
V = 4./3*np.pi*r**3
sigma = K*V/mp.get_KB()/T
nu = 0
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
The joint distribution of the two angles is computed analytically and compared with the numerical result.
The resulting distribution is symmetric because the two particles are independent and identically distributed.
In [19]:
plt.hist2d(theta0, theta1, bins=30, density=True);
ts = np.linspace(min(theta0), max(theta0), 100)
b = boltz_2d(ts, nu, sigma)
plt.contour(ts, ts, b, cmap='Greys')
plt.title('Joint distribution')
plt.xlabel('$\\theta_1$'); plt.ylabel('$\\theta_2$');
We can also compute the marginal distribution (i.e. the equilibrium distribution of a single particle), which makes the agreement between the simulated and analytic distributions easier to see.
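Concretely, the marginal follows by integrating the joint density over the other angle; numerically this corresponds to summing the gridded joint density along one axis and renormalising, as done in the next cell:
$$p(\theta_1)=\int_0^{\pi}p(\theta_1,\theta_2)\,\mathrm{d}\theta_2$$
Because the two particles are identical, the marginals of $\theta_1$ and $\theta_2$ coincide.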
In [20]:
from scipy.integrate import trapz
In [21]:
b_marginal = b.sum(axis=0) / trapz(b.sum(axis=0), ts)
plt.hist(theta0, bins=50, density=True)
plt.plot(ts, b_marginal)
In [22]:
# Dimensionless parameters
V = 4./3*np.pi*r**3
sigma = K*V/mp.get_KB()/T
nu = mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi**2 / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
The interaction strength is very strong (in fact the particles are impossibly close; see the check below). The following simulation is the same as before except that interactions=True; note that it also uses a smaller time step (1e-13 s) and an explicit solver with renorm=True.
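To see just how close, a quick check using the parameters defined above: the centre-to-centre separation R is smaller than a single particle diameter 2r, so the two spheres would physically overlap.
# the chosen separation is smaller than one particle diameter
print('Particle diameter 2r = {:.0f} nm'.format(2*r*1e9))
print('Separation R         = {:.0f} nm'.format(R*1e9))
print('Particles overlap:', R < 2*r)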
In [24]:
res = model.simulate_ensemble(end_time=1e-9, time_step=1e-13,
                              max_samples=500, seeds=range(10000),
                              n_jobs=8, implicit_solve=False,
                              interactions=True, renorm=True)
m_z0i = np.array([state['z'][0] for state in res.final_state()])/Ms
m_z1i = np.array([state['z'][1] for state in res.final_state()])/Ms
theta0i = np.arccos(m_z0i)
theta1i = np.arccos(m_z1i)
In [25]:
plt.plot(res.results[0].time, res.ensemble_magnetisation())
plt.title('Interacting dimer ensemble magnetisation');
The stationary distributions align, but only after introducing a fudge factor of $\pi$ into the denominator of the interaction constant $\nu$. In other words we use $$\nu_\textrm{fudge}=\frac{\nu}{\pi}$$
This factor of $1/\pi$ could come from an integration step somewhere. I suspect the error is in my analytic calculation rather than in the code: the code certainly uses the correct expression for the interaction strength, whereas I derived this test myself.
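To make the fudge factor explicit, the sketch below (using the constants already defined above) compares $\nu$ as written in the analytic expression with the value actually used in the cells below; their ratio is the factor of $\pi$ in question.
# nu as written in the analytic expression: mu0 V^2 Ms^2 / (2 pi R^3 kB T)
nu_analytic = mp.get_mu0()*V**2*Ms**2 / (2.0*np.pi*R**3*mp.get_KB()*T)
# nu as actually used below: an extra factor of pi in the denominator
nu_used = mp.get_mu0()*V**2*Ms**2 / (2.0*np.pi**2*R**3*mp.get_KB()*T)
print('nu (analytic expression)  : {:.3f}'.format(nu_analytic))
print('nu (used in this notebook): {:.3f}'.format(nu_used))
print('ratio                     : {:.3f}'.format(nu_analytic/nu_used))   # = pi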
In [29]:
# candidate centre-to-centre separations and the corresponding prefactors
# (not used in the cells below)
Rs = [6e-9, 7e-9, 8e-9, 9e-9, 10e-9]
pref = [0.57, 0.6, 0.6, 0.6, 0.6]
In [30]:
# Dimensionless parameters
V = 4./3*np.pi*r**3
sigma = K*V/mp.get_KB()/T
nu = 1.0 * mp.get_mu0() * V**2 * Ms**2 / 2.0 / np.pi**2 / R**3 / mp.get_KB() / T
print('Sigma: {:.3f}'.format(sigma))
print(' Nu: {:.3f}'.format(nu))
In [31]:
plt.hist2d(theta0i, theta1i, bins=30, density=True);
ts = np.linspace(min(theta0i), max(theta0i), 100)
b = boltz_2d(ts, nu, sigma)
plt.contour(ts, ts, b, cmap='Greys')
plt.title('Joint distribution')
plt.xlabel('$\\theta_1$'); plt.ylabel('$\\theta_2$');
We use the marginal distribution again to check the convergence, and also compare against the non-interacting case.
In [32]:
b_marginal = b.sum(axis=0) / trapz(b.sum(axis=0), ts)
plt.hist(theta0i, bins=50, density=True, alpha=0.6, label='Magpy + inter.')
plt.hist(theta0, bins=50, density=True, alpha=0.6, label='Magpy + no inter.')
plt.plot(ts, b_marginal, label='Analytic')
plt.legend();
One way to test this would be to simulate a 3D system. If another factor of $\pi$ appears, then it is definitely something missing from my analytic calculations.