The Onsager calculators currently include two computational approaches to determining transport coefficients: an "interstitial" calculator, and a "vacancy-mediated" calculator. Below we describe the use of both. This follows the overall structure of a transport coefficient calculation. Broadly speaking, the steps necessary to compute transport coefficients are: define the crystal on which the defects live; construct the diffuser (calculator) from the defect configurations and transition states; compute atomic-scale data (energies, prefactors, and optionally elastic dipoles) for those configurations and transition states; and convert the diffuser output into physical transport quantities. The sections below follow this structure.
The Onsager code computes transport of defects on an infinite crystalline lattice. Currently, the code requires that the particular defects can be mapped onto Wyckoff positions in a crystal. This does not require that the defect be an atom occupying various Wyckoff positions (though that case is obviously captured), but merely that the defect have the symmetry and transitions that can be equivalently described by an "object" that occupies Wyckoff positions. Simple examples include vacancies, substitutional solutes, and simple interstitial atoms, as well as more complex cases such as split vacancy defects (e.g., a V-O$_\text{i}$-V split double vacancy with oxygen interstitial in a close-packed crystal; the entire defect complex can be mapped onto the Wyckoff position of the oxygen interstitial). In order to calculate diffusion, a few assumptions are made: the defects are dilute (they do not interact with one another); transitions between configurations are Markovian, thermally activated rare events; the defects do not induce a phase transformation of the host crystal; and the crystal itself is translationally invariant.
All of these assumptions are usually good: the dilute limit is valid without strong interactions (such as site blocking), Markovian processes are valid as long as barriers are a few times $k_\text{B}T$, and we are not currently aware of any (simple) defects that induce phase transformations.
Furthermore, relaxation around a defect (or defect cluster) is allowed, but the assumption is that all of the atomic positions can be easily mapped back to "perfect" crystal lattice sites; in that sense, this is not an "off-lattice" model. In some cases, it is possible to incorporate "new" states, especially metastable states, that are only accessible by a defect.
Finally, the code requires that all diffusion happens on a single sublattice. This sublattice is defined by a single chemical species; it can include multiple Wyckoff positions. But the current algorithms assume that transitions do not result in the creation of antisite defects (where a chemical species is on an "incorrect" sublattice).
The assumption of translational invariance of our defects is captured by the use of a Crystal object. Following the standard definition of a crystal, we need to specify (a) three lattice vectors, and (b) at least one basis position, corresponding to at least one site. The crystal needs to contain at least the Wyckoff positions on a single sublattice corresponding to the diffusing defects. It can be useful for it to contain additional atoms that act as "spectator" atoms: they do not participate in diffusion, but they define the underlying symmetry of the crystal and, if atomic-scale calculations will be used to compute configuration and transition-state energies, they are necessary to define the energy landscape of diffusion.
The lattice vectors of the underlying crystal set the units of length in the transport coefficients. Hence, if the vectors are entered in units of nm, this corresponds to a factor of $10^{-18}\text{ m}^2$ in the transport coefficients. This should also be considered when including factors of volume per atom.
The lattice vectors can be entered either as a list (or tuple) of three numpy vectors, or as a square numpy matrix. Note: if you enter the three vectors as a matrix, remember that it assumes the vectors are column vectors. That is, if amat is the matrix, then amat[:,0] is $\mathbf{a}_1$, amat[:,1] is $\mathbf{a}_2$, and amat[:,2] is $\mathbf{a}_3$. This may not be what you're expecting. The main recommendation is to enter the lattice vectors as a list (or tuple) of three numpy vectors.

The atomic basis is entered as a list of lists of numpy vectors of positions in unit cell coordinates. For a given basis, basis[0] is a list of all positions for the first chemical element in the crystal, basis[1] is the second chemical element, and so on. If you only have a single chemical element, you may enter a simple list of numpy vectors.

Once initialized, two main internal operations take place:
First, the input cell is reduced to a primitive-cell representation of the Crystal; second, the symmetry (space group) operations of the crystal are determined. This means that the four-atom "simple cubic" unit cell of face-centered cubic can be input, and the code will reduce it to the standard single-atom primitive cell. The reduction algorithm can end up with "unusual" choices of lattice vectors, so we also optimize the lattice vectors so that they are as close to orthogonal as possible, and ordered from smallest to largest. The atomic basis may be shifted uniformly so that, if an inversion operation is present, the inversion center is the origin. Neither choice changes the representation of the crystal; however, the reduction operation can be skipped by including the option noreduce=True.

Note: Crystals can also be constructed by manipulating existing Crystal objects. A useful case is the interstitial diffuser: when working "interactively," it is often easier to first make the underlying "spectator" crystal, have that Crystal construct the set of Wyckoff positions for a single site in the crystal, and then add those to the basis. Crystal objects are intended to be read-only, so these manipulations result in the creation of a new Crystal object.
A few quick examples:
In [1]:
import numpy as np
import sys
sys.path.extend(['.', '..'])
from onsager import crystal
In [2]:
a0 = 1.
FCCcrys = crystal.Crystal([a0*np.array([0,0.5,0.5]),
                           a0*np.array([0.5,0,0.5]),
                           a0*np.array([0.5,0.5,0])],
                          [np.array([0.,0.,0.])], chemistry=['fcc'])
print(FCCcrys)
or by entering the simple cubic unit cell with four atoms:
In [3]:
FCCcrys2 = crystal.Crystal(a0*np.eye(3),
                           [np.array([0.,0.,0.]), np.array([0,0.5,0.5]),
                            np.array([0.5,0,0.5]), np.array([0.5,0.5,0])],
                           chemistry=['fcc'])
print(FCCcrys2)
The effect of noreduce can be seen by regenerating the FCC crystal using the simple cubic unit cell:
In [4]:
FCCcrys3 = crystal.Crystal(a0*np.eye(3),
                           [np.array([0.,0.,0.]), np.array([0,0.5,0.5]),
                            np.array([0.5,0,0.5]), np.array([0.5,0.5,0])],
                           chemistry=['fcc'], noreduce=True)
print(FCCcrys3)
In [5]:
MgO = crystal.Crystal([a0*np.array([0,0.5,0.5]),
                       a0*np.array([0.5,0,0.5]),
                       a0*np.array([0.5,0.5,0])],
                      [[np.array([0.,0.,0.])], [np.array([0.5,0.5,0.5])]],
                      chemistry=['Mg', 'O'])
print(MgO)
Interstitials in FCC crystals usually diffuse through a network of octahedral and tetrahedral sites. We can use the Wyckoffpos(u) function of a crystal to generate the list of equivalent sites corresponding to an interstitial position, and the addbasis() function to create a new crystal with these interstitial sites.
In [6]:
octbasis = FCCcrys.Wyckoffpos(np.array([0.5, 0.5, 0.5]))
tetbasis = FCCcrys.Wyckoffpos(np.array([0.25, 0.25, 0.25]))
FCCcrysint = FCCcrys.addbasis(octbasis + tetbasis, ['int'])
print(octbasis)
print(tetbasis)
print(FCCcrysint)
The Interstitial calculator is designed for systems where a single defect species diffuses throughout the crystal. This includes single-vacancy diffusion and interstitial solute diffusion. As for any diffusion calculator, we need to define the configurations that the defect will sample, and the transition states between them. In the case of a single defect species, the configurations are simply the crystal sites on the corresponding sublattice (identified by their chemistry index). We use the sitelist(chemistry) function to construct a list of lists of indices for a given chemistry; the indices within each list are symmetrically equivalent crystal basis indices, and the separate lists are symmetrically inequivalent: this is a space-group partitioning into equivalent Wyckoff positions.
The transition states are stored as a jumpnetwork, which is a list of lists of tuples of transitions: (initial index, final index, deltax), where the indices are self-explanatory and deltax is a Cartesian vector corresponding to the translation from the initial state to the final state. The transitions in each list are equivalent by symmetry, and the separate lists are symmetrically inequivalent. Note also that reverse transitions are included: (final index, initial index, -deltax). While the jumpnetwork can be constructed "by hand," it is recommended to use the jumpnetwork() function inside of a crystal to automate the generation, and then remove any "spurious" transitions that are identified.
The algorithm in jumpnetwork() is rather simple: a transition is included if (1) the distance between the initial and final sites is below a cutoff distance, (2) no other site of the same sublattice lies close to the straight line connecting the initial and final sites, and (3) the path does not pass too close to any "spectator" atom. The first criterion identifies "close" jumps; the second criterion eliminates "long" transitions between states when an intermediate configuration may be possible (i.e., $\text{A}\to\text{B}$ when $\text{A}\to\text{C}\to\text{B}$ would be more likely as the state C is "close" to the line connecting A to B); and the final criterion eliminates transitions that take the defect too close to a "spectator" atom in the crystal.
The interstitial diffuser also identifies unique tags for all configurations and transition states. The tags for configurations are strings beginning with i: followed by the unit cell coordinates of the site to three decimal digits. The tags for transition states are strings beginning with i: followed by the unit cell coordinates of the initial state, a ^, and the unit cell coordinates of the final state. When one pretty-prints the interstitial diffuser object, the symmetry-unique tags are printed. Note that all of the symmetry-equivalent tags are stored in the object, and can be used to identify configurations and transition states; this is the preferred method for indexing, rather than relying on a particular index into the corresponding lists. The interstitial diffuser calculator contains dictionaries that can be used to convert from tags to indices and vice versa.
Finally, YAML interfaces to output the sitelist and jumpnetwork for an interstitial diffuser are included; combined with the YAML output of the Crystal, this allows for a YAML-serialized representation of the diffusion object.
In [7]:
from onsager import OnsagerCalc
In [8]:
chem = 0
FCCsitelist = FCCcrys.sitelist(chem)
print(FCCsitelist)
In [9]:
chem = 0
FCCjumpnetwork = FCCcrys.jumpnetwork(chem, cutoff=a0*0.78)
for n, jn in enumerate(FCCjumpnetwork):
    print('Jump type {}'.format(n))
    for (i,j), dx in jn:
        print(' {} -> {} dx= {}'.format(i,j,dx))
In [10]:
chem = 0
FCCvacancydiffuser = OnsagerCalc.Interstitial(FCCcrys, chem, FCCsitelist, FCCjumpnetwork)
print(FCCvacancydiffuser)
In [11]:
chem = 1
MgOsitelist = MgO.sitelist(chem)
print(MgOsitelist)
In [12]:
chem = 1
MgOjumpnetwork = MgO.jumpnetwork(chem, cutoff=a0*0.78)
for n, jn in enumerate(MgOjumpnetwork):
    print('Jump type {}'.format(n))
    for (i,j), dx in jn:
        print(' {} -> {} dx= {}'.format(i,j,dx))
In [13]:
chem = 1
MgOdiffuser = OnsagerCalc.Interstitial(MgO, chem, MgOsitelist, MgOjumpnetwork)
print(MgOdiffuser)
Interstitials in FCC crystals usually diffuse through a network of octahedral and tetrahedral sites. Nominally, diffusion should occur through octahedral-tetrahedral jumps, but we can extend the cutoff distance to find additional jumps between tetrahedral sites.
In [14]:
chem = 1
FCCintsitelist = FCCcrysint.sitelist(chem)
print(FCCintsitelist)
In [15]:
chem = 1
FCCintjumpnetwork = FCCcrysint.jumpnetwork(chem, cutoff=a0*0.51)
for n, jn in enumerate(FCCintjumpnetwork):
    print('Jump type {}'.format(n))
    for (i,j), dx in jn:
        print(' {} -> {} dx= {}'.format(i,j,dx))
In [16]:
chem = 1
FCCintdiffuser = OnsagerCalc.Interstitial(FCCcrysint, chem,
                                          FCCintsitelist, FCCintjumpnetwork)
print(FCCintdiffuser)
The YAML representation is intended to combine the structural information necessary to construct the (1) crystal, (2) chemistry index of the diffusing defect, (3) sitelist, and (4) jumpnetwork, together with the energies, prefactors, and elastic dipoles (derivatives of energy with respect to strain) for the symmetry representatives of configurations and jumps. This becomes input for the diffuser when computing transport coefficients as a function of temperature, as well as derivatives with respect to strain (elastodiffusion tensor, activation volume tensor).
In [17]:
print(FCCintdiffuser.crys.simpleYAML() +
      'chem: {}\n'.format(FCCintdiffuser.chem) +
      FCCintdiffuser.sitelistYAML(FCCintsitelist) +
      FCCintdiffuser.jumpnetworkYAML(FCCintjumpnetwork))
For the vacancy-mediated diffuser, the configurations and transition states are more complicated. First, we have three types of configurations: isolated vacancies, isolated solutes, and solute-vacancy complexes.

The vacancies and solutes are assumed to occupy the same sites in the crystal, and neither the vacancy nor the solute is assumed to lower the underlying symmetry of its site. This is a rephrasing of our previous assumption that the symmetry of the defect can be mapped onto the symmetry of the crystal Wyckoff position. There are cases where this is not true: some solutes, when substituted into a crystal, will relax in a way that breaks symmetry. While mathematically this can be treated, we do not currently have an implementation that supports it.
The complexes are only considered out to a finite distance, called the "thermodynamic range." It is defined in terms of "shells": the number of "jumps" needed to reach the vacancy from the solute. We include one more shell out, called the "kinetic range": complexes that have transitions into complexes within the thermodynamic range.
When we consider transition states, there are three types: vacancy jumps far from any solute, vacancy jumps within a solute-vacancy complex, and vacancy-solute exchange jumps. These are called, in the "five-frequency framework," omega-0, omega-1, and omega-2 jumps, respectively. The five-frequency model technically identifies omega-1 jumps as only between complexes in the thermodynamic range, while the two "additional" jump types, omega-3 and omega-4, connect complexes in the kinetic range to the thermodynamic range. Operationally, we combine omega-1, -3, and -4 into a single set.
To make a diffuser, we need to provide the sitelist of the vacancies (and hence, the solutes) and the jumpnetwork of the vacancies; the diffuser then automatically constructs the complexes out to the thermodynamic range, and the full jumpnetworks.
The vacancy-mediated diffuser also identifies unique tags for all configurations and transition states. The tags for configurations are strings with

- v: followed by the unit cell coordinates of the site to three decimal digits, for the vacancy;
- s: followed by the unit cell coordinates of the site to three decimal digits, for the solute;
- s:...-v:... for a solute-vacancy complex.

The tags for transition states are strings with

- omega0: + (initial vacancy configuration) + ^ + (final vacancy configuration);
- omega1: + (initial solute-vacancy configuration) + ^ + (final vacancy configuration);
- omega2: + (initial solute-vacancy configuration) + ^ + (final solute-vacancy configuration).

When one pretty-prints the vacancy-mediated diffuser object, the symmetry-unique tags are printed. Note that all of the symmetry-equivalent tags are stored in the object, and can be used to identify configurations and transition states; this is the preferred method for indexing, rather than relying on a particular index into the corresponding lists. The vacancy-mediated diffuser calculator contains dictionaries that can be used to convert from tags to indices and vice versa.
We construct the Onsager equivalent of the classic five-frequency model. We can use the sitelist and jumpnetwork that we already constructed for the vacancy by itself. Note that the omega-1 list contains four jumps: one that is the normally identified "omega-1," and three others that correspond to vacancy "escapes" from the first-neighbor complex to the second, third, and fourth neighbors. In the classic five-frequency model, these rates are all forced to be equal.
In [18]:
chem = 0
fivefreqdiffuser = OnsagerCalc.VacancyMediated(FCCcrys, chem,
                                               FCCsitelist, FCCjumpnetwork, 1)
print(fivefreqdiffuser)
An HDF5 representation of the diffusion calculator can be stored for efficient reconstruction of the object, as well as for passing between machines. The HDF5 representation includes everything: the underlying Crystal, the sitelist and jumpnetworks, and all of the precalculation and analysis needed for diffusion. This greatly speeds up the construction of the calculator.
In [19]:
import h5py
# replace '/dev/null' with your file of choice, and remove backing_store=False
# to read and write to an HDF5 file.
f = h5py.File('/dev/null', 'w', driver='core', backing_store=False)
fivefreqdiffuser.addhdf5(f) # adds the diffuser to the HDF5 file
# how to read in (after opening `f` as an HDF5 file)
fivefreqcopy = OnsagerCalc.VacancyMediated.loadhdf5(f) # creates a new diffuser from HDF5
f.close() # close up the HDF5 file
print(fivefreqcopy)
At this stage, we have the diffusion "calculator" necessary to compute diffusion, but we need to determine appropriate atomic-scale data to act as input into our calculators. There are two primary steps: (1) constructing appropriate "supercells" containing defect configurations and transition states to be computed, and (2) extracting the appropriate information from those calculations to use in the diffuser. This section deals with the former; the next section will deal with the latter.
The tags are the most straightforward way to identify structures as they are computed, and hence they serve as the mechanism for communicating data into the calculators. To make supercells with defects, we take advantage of the supercell module in Onsager; both calculators contain a makesupercells() function that returns dictionaries of supercells, tags, and appropriate information. Currently, to transform these into usable input files, the automator module can convert such dictionaries into tarballs with an appropriate directory structure, files containing information about the tags for the different configurations, and a Makefile that converts CONTCAR output into appropriate POS input for the nudged-elastic-band calculations.
Both makesupercells() commands require an input supercell definition, which is a $3\times3$ integer matrix of column vectors; if N is such a matrix, then the supercell vectors are the columns of A = np.dot(a, N), so that $\mathbf A_1$ has components N[:,0] in direct coordinates.
In [20]:
from onsager import automator
import tarfile
In [21]:
help(FCCintdiffuser.makesupercells)
In [22]:
N = np.array([[-2,2,2],[2,-2,2],[2,2,-2]]) # 32 atom FCC supercell
print(np.dot(FCCcrys.lattice, N))
FCCintsupercells = FCCintdiffuser.makesupercells(N)
In [23]:
help(automator.supercelltar)
In [24]:
with tarfile.open('io-test-int.tar.gz', mode='w:gz') as tar:
    automator.supercelltar(tar, FCCintsupercells)
In [25]:
tar = tarfile.open('io-test-int.tar.gz', mode='r:gz')
In [26]:
tar.list()
Contents of the Makefile:
In [27]:
with tar.extractfile('Makefile') as f:
    print(f.read().decode('ascii'))
Contents of the tags.json file:
In [28]:
with tar.extractfile('tags.json') as f:
    print(f.read().decode('ascii'))
Contents of one POSCAR file for relaxation of a configuration:
In [29]:
with tar.extractfile('relax.00/POSCAR') as f:
    print(f.read().decode('ascii'))
In [30]:
tar.close()
We will need to construct (and relax) appropriate vacancy, solute, and solute-vacancy complexes, and the transition states between them. The commands are nearly identical to the interstitial diffuser; the primary difference is the larger number of configurations and files.
In [31]:
help(fivefreqdiffuser.makesupercells)
In [32]:
N = np.array([[-3,3,3],[3,-3,3],[3,3,-3]]) # 108 atom FCC supercell
print(np.dot(FCCcrys.lattice, N))
fivefreqsupercells = fivefreqdiffuser.makesupercells(N)
In [33]:
with tarfile.open('io-test-fivefreq.tar.gz', mode='w:gz') as tar:
    automator.supercelltar(tar, fivefreqsupercells)
In [34]:
tar = tarfile.open('io-test-fivefreq.tar.gz', mode='r:gz')
In [35]:
tar.list()
Contents of the Makefile:
In [36]:
with tar.extractfile('Makefile') as f:
    print(f.read().decode('ascii'))
Contents of the tags.json file:
In [37]:
with tar.extractfile('tags.json') as f:
    print(f.read().decode('ascii'))
Contents of one POSCAR file for relaxation of a configuration:
In [38]:
with tar.extractfile('relax.01/POSCAR') as f:
    print(f.read().decode('ascii'))
In [39]:
tar.close()
Once the atomic-scale data from an appropriate total energy calculation is finished, the data needs to be input into formats that the appropriate diffusion calculator can understand. There are some common definitions between the two, but some differences as well.
In all cases, we work with the assumption that our states are thermally occupied, and our rates are Arrhenius. That means that the (relative) probability of any state can be written as $$\rho = Z^{-1}\rho^0 \exp(-E/k_\text{B}T)$$
for the partition function $Z$, a site entropic term $\rho^0 = \exp(S/k_\text{B})$, and energy $E$. The transition rate from state A to state B is given by $$\lambda(\text{A}\to\text{B}) = \frac{\nu^\text{T}_{\text{A}-\text{B}}}{\rho^0_A} \exp(-(E^\text{T}_{\text{A}-\text{B}} - E_\text{A})/k_\text{B}T)$$
where $E^\text{T}_{\text{A}-\text{B}}$ is the energy of the transition state between A and B, and $\nu^\text{T}_{\text{A}-\text{B}}$ is the prefactor for the transition state.
If we assume harmonic transition state theory, then we can write the site entropic term $\rho^0$ as $$\rho^0 = \frac{\prod \nu^{\text{perfect-supercell}}}{\prod \nu^{\text{defect-supercell}}}$$
where $\nu$ are the vibrational eigenvalues of the corresponding supercells, and the prefactor for the transition state is $$\nu^\text{T} = \frac{\prod \nu^{\text{perfect-supercell}}}{\prod_{\nu^2>0} \nu^{\text{transition state}}}$$
where we take the product over the real vibrational frequencies in the transition state (there should be one imaginary mode). From a practical point of view, the perfect-supercell contributions cancel out; we will often set $\rho^0$ to 1 for a single state (so that the other $\rho^0$ values are relative probabilities), and then $\nu^\text{T}$ becomes more similar to the attempt frequency for the particular jump. The definitions above map most simply onto a "hopping atom" approximation for the jump rates: the $3\times3$ force-constant matrix is computed for the atom that moves in the transition, and its eigenvalues are used to determine the modes $\nu$.
Note the units: $\rho^0$ is unitless, while $\nu^\text{T}$ has units of inverse time; this means that the inverse time unit in the computed transport coefficients will come from $\nu^\text{T}$ values. If they are entered in THz, that contributes $10^{12}\text{ s}^{-1}$.
Because we normalize our probabilities, our energies and transition-state energies are only defined relative to each other. In all of our calculations, we will multiply energies by $\beta=(k_\text{B}T)^{-1}$ to get unitless values as inputs for our diffusion calculators. This means that the diffusers do not have direct information about temperature; explicit temperature factors that appear in the Onsager coefficients must be included by hand in the output transport coefficients. It also means that the calculators do not have a "unit" of energy; rather, $k_\text{B}T$ and the energies simply must be in the same units.
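As a quick numerical illustration of the rate expressions above, here is a minimal sketch with made-up "hopping atom" frequencies and energies (these values are illustrative placeholders, not taken from any calculation):

import numpy as np

kT = 0.025   # eV; an illustrative temperature (~290 K)

# Hypothetical "hopping atom" frequencies (THz): three modes in each configuration,
# two real modes at the transition state (the single imaginary mode is dropped).
nu_perfect = np.array([10., 10., 10.])
nu_defect  = np.array([ 8.,  9., 10.])
nu_TS      = np.array([12., 11.])

rho0 = np.prod(nu_perfect)/np.prod(nu_defect)   # unitless site entropic factor rho^0
nuT  = np.prod(nu_perfect)/np.prod(nu_TS)       # THz; transition-state prefactor nu^T
E_A, ET_AB = 0.0, 0.55                          # eV; configuration and transition-state energies

rate_AB = (nuT/rho0)*np.exp(-(ET_AB - E_A)/kT)  # THz, i.e. 10^12 s^-1
print(rho0, nuT, rate_AB)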
We need to compute prefactors and energies for our interstitial diffuser. We can also include information about elastic dipoles (derivatives of energy with respect to strain) in order to compute derivatives of diffusivity with respect to strain (elastodiffusion).
In [40]:
help(FCCintdiffuser.diffusivity)
The ordering in the lists pre, betaene, preT, and betaeneT corresponds to the sitelist and jumpnetwork lists. The tags can be used to determine the proper indices. The most straightforward way to store this data in Python is a dictionary, where the key is the tag and the value is a list of [prefactor, energy]. The advantage of this is that it can be easily transformed to and from JSON for simple serialization.
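A minimal sketch of that JSON round trip, using a single illustrative tag:

import json

# Serialize a {tag: [prefactor, energy]} dictionary to JSON and read it back.
data = {'i:+0.500,+0.500,+0.500': [1., 0.]}
assert json.loads(json.dumps(data)) == data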
To see a full list of all tags, the tags member of a diffuser gives a dictionary of all tags, ordered to match the structure of sitelist and jumpnetwork.
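Since every symmetry-equivalent tag is stored, a reverse lookup from any tag to its list index can be built directly from the tags member; this is a small sketch, not part of the onsager API:

# Map any (symmetry-equivalent) tag to its index in the sitelist/jumpnetwork lists.
tag2index = {listtype: {tag: n
                        for n, taglist in enumerate(FCCintdiffuser.tags[listtype])
                        for tag in taglist}
             for listtype in ('states', 'transitions')}
print(tag2index['states']['i:+0.500,+0.500,+0.500'])  # index of the octahedral site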
In [41]:
FCCintdiffuser.tags
Out[41]:
In this example, the energy of the octahedral site is 0, with a base prefactor of 1. The tetrahedral site has an energy 0.5 (eV) higher, with a larger relative vibrational prefactor of 2. The transition-state energy from octahedral to tetrahedral is 1.0 (eV) with a prefactor of 10 (THz); the transition-state energy from tetrahedral to tetrahedral is 2.0 (eV) with a prefactor of 50 (THz).
In [42]:
FCCintdata = {
    'i:+0.500,+0.500,+0.500': [1., 0.],
    'i:+0.750,+0.750,+0.750': [2., 0.5],
    'i:+0.500,+0.500,+0.500^i:+0.750,+0.750,-0.250': [10., 1.0],
    'i:+0.750,+0.750,+0.750^i:+1.250,+1.250,+0.250': [50., 2.0]
}
In [43]:
# Conversion from dictionary to lists for a given kBT
# We go through the tags in order, and find one in our data set.
kBT = 0.25 # eV; a rather high temperature
pre = [FCCintdata[t][0] for taglist in FCCintdiffuser.tags['states']
       for t in taglist if t in FCCintdata]
betaene = [FCCintdata[t][1]/kBT for taglist in FCCintdiffuser.tags['states']
           for t in taglist if t in FCCintdata]
preT = [FCCintdata[t][0] for taglist in FCCintdiffuser.tags['transitions']
        for t in taglist if t in FCCintdata]
betaeneT = [FCCintdata[t][1]/kBT for taglist in FCCintdiffuser.tags['transitions']
            for t in taglist if t in FCCintdata]
print(pre,betaene,preT,betaeneT,sep='\n')
In [44]:
DFCCint, dDFCCint = FCCintdiffuser.diffusivity(pre, betaene, preT, betaeneT, CalcDeriv=True)
print(DFCCint, dDFCCint, sep='\n')
The interpretation of this output will be described below.
We will need to compute prefactors and energies for our vacancy, solute, and solute-vacancy complexes, and the transition states between them. The difference compared with the interstitial case is that complex prefactors and energies are excess quantities. That means that, for a complex, its $\rho^0$ is the product of the $\rho^0$ for the solute state, the vacancy state, and the excess, and its energy $E$ is the sum of the energy of the solute state, the vacancy state, and the excess. For the transition states, however, the prefactors and energies are "absolute."
In [45]:
help(fivefreqdiffuser.Lij)
The vacancy-mediated diffuser expects combined $\beta F := (E - TS)/k_\text{B}T$, so that our probabilities and rates are proportional to $\exp(-\beta F)$. This is complicated to construct directly, so we have the intermediate function preene2betafree(), which is best used by feeding it a dictionary of arrays:
In [46]:
help(fivefreqdiffuser.preene2betafree)
Even this is a bit complicated, so we use an additional function that maps the tags into the appropriate lists, tags2preene():
In [47]:
help(fivefreqdiffuser.tags2preene)
In this example, we have a vacancy-solute binding energy of -0.25 (eV), a vacancy jump barrier of 1.0 (eV) with a prefactor of 10 (THz), an "omega-1" activation barrier of 0.75 (eV), which corresponds to a transition-state energy of 0.75 - 0.25 = 0.5 (eV), an omega-2 activation barrier of 0.5 (eV), which corresponds to a transition-state energy of 0.5 - 0.25 = 0.25 (eV), and all of the "omega-3/-4" escape jumps with a transition-state energy of 1 - 0.25/2 = 0.875 (eV).
In [48]:
fivefreqdata = {
    'v:+0.000,+0.000,+0.000': [1., 0.],
    's:+0.000,+0.000,+0.000': [1., 0.],
    's:+0.000,+0.000,+0.000-v:+0.000,-1.000,+0.000': [1., -0.25],
    'omega0:v:+0.000,+0.000,+0.000^v:+0.000,+1.000,+0.000': [10., 1.],
    'omega1:s:+0.000,+0.000,+0.000-v:+1.000,+0.000,-1.000^v:+1.000,+1.000,-1.000': [10., 0.5],
    'omega1:s:+0.000,+0.000,+0.000-v:+1.000,-1.000,+0.000^v:+1.000,+0.000,+0.000': [20., 0.875],
    'omega1:s:+0.000,+0.000,+0.000-v:-1.000,+1.000,+0.000^v:-1.000,+2.000,+0.000': [20., 0.875],
    'omega1:s:+0.000,+0.000,+0.000-v:+0.000,+1.000,+0.000^v:+0.000,+2.000,+0.000': [20., 0.875],
    'omega2:s:+0.000,+0.000,+0.000-v:+0.000,-1.000,+0.000^s:+0.000,+0.000,+0.000-v:+0.000,+1.000,+0.000':
        [10., 0.25]
}
In [49]:
# Conversion from dictionary to lists for a given kBT
# note that we can nest the mapping functions.
kBT = 0.25 # eV; a rather high temperature
fivefreqpreene = fivefreqdiffuser.tags2preene(fivefreqdata)
fivefreqbetaF = fivefreqdiffuser.preene2betafree(kBT, **fivefreqpreene)
L0vv, Lss, Lsv, L1vv = fivefreqdiffuser.Lij(*fivefreqbetaF)
print(L0vv, Lss, Lsv, L1vv, sep='\n')
The interpretation of this output will be described below.
The final step is to take the output from the diffuser calculator, and convert this into physical quantities: solute diffusivity, elastodiffusivity, Onsager coefficients, drag ratios, and so on.
There are two underlying definitions that we use to define our transport coefficients: $$\mathbf j = -\underline D \nabla c$$
defines the diffusivity as the tensorial transport coefficient that relates defect concentration gradients to defect fluxes, and $$\mathbf j^\text{s} = -\underline L^\text{ss}\nabla\mu^\text{s} - \underline L^\text{sv}\nabla\mu^\text{v}$$
$$\mathbf j^\text{v} = -\underline L^\text{vv}\nabla\mu^\text{v} - \underline L^\text{sv}\nabla\mu^\text{s}$$
define the Onsager coefficients as the tensorial transport coefficients that relate solute and vacancy chemical potential gradients to solute and vacancy fluxes. We also use these equations to define the units of our transport coefficients. Fluxes are in units of (number)/area/time, so with concentration in (number)/volume, diffusivity has units of area/time. If the chemical potential is written in units of energy, the Onsager coefficients have units of (number)/length/energy/time. If the chemical potentials instead have units of energy/volume, then the corresponding Onsager coefficients have units of area/energy/time.
Below are more specific details about the different calculators and the output available.
The interstitial diffuser outputs a diffusivity tensor that has units of squared length based on the lengths in the corresponding Crystal, and inverse time units corresponding to the rates that are given as input: the ratio of transition-state prefactors to configuration prefactors. In a crystalline system, it is typical to specify the lattice vectors in either nm ($10^{-9}\text{ m}$) or Å ($10^{-10}\text{ m}$), and the prefactors of rates are often in THz ($10^{12}\text{ s}^{-1}$), while diffusivity is often reported in either $\text{m}^2/\text{s}$ or $\text{cm}^2/\text{s}$. The conversion factors are
$$1\text{ nm}^2\cdot\text{THz} = 10^{-6}\text{ m}^2/\text{s} = 10^{-2}\text{ cm}^2/\text{s}$$
It is worth noting that this model of diffusion assumes that the "interstitial" form of the defect is its ground-state configuration (or at least that one of the configurations used in the derivation of the diffusivity is a ground-state configuration). This is generally the case for the diffusion of a vacancy or of light interstitial elements; however, there are materials where a solute has a lower energy as a substitutional defect, but can occupy an interstitial site and diffuse from there. This requires knowledge of the relative occupancy of the two states. Using Kröger-Vink notation, let $[\text{B}]$ be the total solute concentration, and $[\text{B}_\text{A}]$ and $[\text{B}_\text{i}]$ the substitutional and interstitial concentrations; then $$D_\text{B} = \left\{[\text{B}_\text{i}]D_\text{int} + [\text{B}_\text{A}]D_\text{sub}\right\}/[\text{B}]$$
for interstitial diffusivity $D_\text{int}$ and substitutional diffusivity $D_\text{sub}$. The relative occupancies may be determined by global or local thermal equilibrium. The latter is more complex, relies on knowledge of local defect processes and conditions, and is not discussed further here. For global thermal equilibrium, if we know the energy of the ground-state substitutional defect $E_\text{sub}$ and the lowest-energy configuration used by the diffuser $E_\text{int}$, then $$[\text{B}_\text{i}]/[\text{B}] = \left(1 + \exp((E_\text{int}-E_\text{sub})/k_\text{B}T)\right)^{-1} \approx \exp(-(E_\text{int}-E_\text{sub})/k_\text{B}T)$$
and $$[\text{B}_\text{A}]/[\text{B}] = \left(1 + \exp(-(E_\text{int}-E_\text{sub})/k_\text{B}T)\right)^{-1} \approx 1$$
where the approximations are valid when $E_\text{int}-E_\text{sub}\gg k_\text{B}T$.
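A minimal numerical sketch of the occupancy-weighted diffusivity above; the energies and diffusivities are illustrative placeholders, not output from onsager:

import numpy as np

kT = 0.0861733   # eV, corresponding to T = 1000 K
E_int, E_sub = 0.6, 0.0     # eV; interstitial configuration above the substitutional ground state
D_int, D_sub = 1e-6, 1e-14  # cm^2/s; illustrative interstitial and substitutional diffusivities

x_int = 1./(1. + np.exp((E_int - E_sub)/kT))  # [B_i]/[B]
x_sub = 1. - x_int                            # [B_A]/[B]
print(x_int*D_int + x_sub*D_sub)              # effective solute diffusivity D_B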
At any given temperature, the temperature dependence of the diffusivity can be taken as an Arrhenius form, $$\underline D = \underline D_0 \exp(-\beta \underline E^\text{act})$$
for inverse temperature $\beta = (k_\text{B}T)^{-1}$, where the activation barrier $\underline E^\text{act}$ can also display anisotropy. Note that in this expression, the exponential is taken on a per-component basis, not as a true tensor exponential. We can compute $\underline E^\text{act}$ by taking the per-component logarithmic derivative with respect to inverse temperature, $$\underline E^\text{act} = -\underline D^{-1/2}\frac{d\underline D}{d\beta}\underline D^{-1/2}$$
The diffusivity() function with CalcDeriv=True returns a second tensorial quantity, dD, which, when multiplied by $k_\text{B}T$, gives $d\underline D/d\beta$. Hence, to compute the activation barrier tensor, we evaluate:
In [50]:
print(np.dot(np.linalg.inv(DFCCint), kBT*dDFCCint))
In this case, as the matrices are isotropic, we can use $\underline D^{-1}$ rather than $\underline D^{-1/2}$, which would need to be computed via diagonalization. This tensor has the same energy units as the variable kBT.
Given the barriers for diffusion, one might have expected $\underline E^\text{act}$ to be 1, as that is the transition-state energy to go from octahedral to tetrahedral. However, the activation barrier is approximately the rate-limiting transition-state energy minus the average configuration energy. Since we have chosen a high temperature, the tetrahedral sites have non-negligible occupation, which raises the average energy. As the temperature decreases, the activation energy will approach 1.
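For an anisotropic diffusivity, the $\underline D^{-1/2}$ factors must be computed via diagonalization; a minimal sketch, mirroring the expression evaluated in the cell above:

def invsqrt(D):
    """D^(-1/2) for a symmetric, positive-definite tensor, computed via diagonalization."""
    w, v = np.linalg.eigh(D)
    return np.dot(v, np.dot(np.diag(1./np.sqrt(w)), v.T))

Dinvsqrt = invsqrt(DFCCint)
print(np.dot(Dinvsqrt, np.dot(kBT*dDFCCint, Dinvsqrt)))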
The derivative with respect to strain is the fourth-rank elastodiffusivity tensor $\underline d$, where $$d_{abcd} = \frac{dD_{ab}}{d\varepsilon_{cd}}$$
This is returned by the elastodiffusion function, which requires that the elastic dipole tensors be included in the function call as well. The elastic dipoles have the same units as energies, and so are input as $\beta\underline P$, which is unitless. The returned tensor has the same units as the diffusivity.
The activation volume tensor (logarithmic derivative of diffusivity with respect to stress) can be computed from the elastodiffusivity tensor if the compliance tensor $\underline S$ is known; then, $$V^\text{act}_{abcd} = k_\text{B}T \sum_{ijkl=1}^3 (\underline D^{-1/2})_{ai} d_{ijkl} (\underline D^{-1/2})_{bj} S_{klcd}$$
The units of this quantity are given by the units of $k_\text{B}T$ (energy) multiplied by the units of $\underline S$ (inverse pressure). Typically, $k_\text{B}T$ will be known in eV and $\underline S$ in GPa$^{-1}$, so the conversion factor $$1\text{ eV}\cdot\text{GPa}^{-1} = 1.6022\times10^{-19}\text{ J}\cdot10^{-9}\text{ m}^3/\text{J} = 0.16022\text{ nm}^3 = 160.22\text{ Å}^3$$
can be useful.
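The contraction above can be written compactly with np.einsum; in this sketch, the rank-4 elastodiffusivity d and compliance S are zero placeholder arrays (in practice, the elastodiffusion output and measured or computed compliances would be used):

# Placeholder (3,3,3,3) arrays standing in for the elastodiffusivity and compliance tensors.
d = np.zeros((3, 3, 3, 3))
S = np.zeros((3, 3, 3, 3))
Dinvsqrt = invsqrt(DFCCint)  # from the sketch above
Vact = kBT*np.einsum('ai,ijkl,bj,klcd->abcd', Dinvsqrt, d, Dinvsqrt, S)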
Like the interstitial diffuser, the vacancy-mediated diffuser outputs transport coefficients with units of squared length based on the lengths in the corresponding Crystal, and inverse time units corresponding to the rates that are given as input: the ratio of transition-state prefactors to configuration prefactors. In a crystalline system, it is typical to specify the lattice vectors in either nm ($10^{-9}\text{ m}$) or Å ($10^{-10}\text{ m}$), and the prefactors of rates are often in THz ($10^{12}\text{ s}^{-1}$). The quantities L0vv, Lss, Lsv, and L1vv output by the Lij function all have units of area/time, so the conversion factors below are often useful:
$$1\text{ nm}^2\cdot\text{THz} = 10^{-6}\text{ m}^2/\text{s} = 10^{-2}\text{ cm}^2/\text{s}$$
To convert the four quantities into $\underline L^\text{vv}$, $\underline L^\text{ss}$, and $\underline L^\text{sv}$, some additional information is required.
First, in the dilute limit, $\underline L^\text{ss}$ and $\underline L^\text{sv}$ are proportional to $(k_\text{B}T)^{-1}c^\text{v}c^\text{s}$; none of these quantities are known to the diffuser, and the two concentrations are essentially independent variables that must be supplied. The concentrations in these cases are fractional concentrations (site fractions), not numbers per volume. Finally, if the Onsager coefficients are to be used with chemical potentials specified as energies (not energies per volume), the quantities also need to be divided by the volume per atom. Hence,

$\underline L^\text{ss}$ = Lss*(solute concentration)*(vacancy concentration)/(volume)/kBT

$\underline L^\text{sv}$ = Lsv*(solute concentration)*(vacancy concentration)/(volume)/kBT

where the concentration quantities are fractional and the volume is the volume per atom.
The vacancy coefficient $\underline L^\text{vv}$ is more complicated, as it has a leading-order term that is independent of the solute, and a first-order correction that is linear in the solute concentration. Hence,

$\underline L^\text{vv}$ = (L0vv + L1vv*(solute concentration))*(vacancy concentration)/(volume)/kBT
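A minimal sketch of these conversions; the fractional concentrations and the volume per atom below are illustrative inputs that must be supplied by the user:

cs, cv = 1.0e-3, 1.0e-6   # illustrative fractional solute and vacancy concentrations
Omega = 0.25*a0**3        # volume per atom for FCC (four atoms per a0^3 cubic cell)

Lss_phys = Lss*cs*cv/Omega/kBT
Lsv_phys = Lsv*cs*cv/Omega/kBT
Lvv_phys = (L0vv + L1vv*cs)*cv/Omega/kBT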
The drag ratio, $\underline L^\text{sv}(\underline L^\text{ss})^{-1}$, indicates whether the solute is dragged along with the vacancy flux (positive) or moves against it (negative); the concentration, volume, and temperature prefactors cancel in the ratio, so it can be computed directly from the raw output.
In [51]:
print(np.dot(Lsv, np.linalg.inv(Lss)))
The vacancy wind factor $G=\underline L^\text{As}(\underline L^\text{ss})^{-1}$, where A denotes the host species, is related to the drag ratio by simple transformations.
The solute diffusivity can also be computed in the dilute limit. The general relation between $\underline D^\text{s}$ and the Onsager transport coefficients is $$\underline D^\text{s} = k_\text{B}T\Omega\left\{(c^\text{s})^{-1}\underline L^\text{ss} - (1-c^\text{s}-c^\text{v})^{-1}\underline L^\text{As}\right\}\left(1+\frac{d\ln\gamma^\text{s}}{d\ln c^\text{s}}\right)$$
where $\Omega$ is the volume per atom and $\gamma^\text{s}$ is the solute activity: $$\mu^\text{s} = \mu^\text{s}_0 + k_\text{B}T\ln\left(\gamma^\text{s}c^\text{s}/c^\text{s}_0\right)$$
In the dilute limit, $\gamma^\text{s}\to 1$, and thus $$\underline D^\text{s} = k_\text{B}T\Omega(c^\text{s})^{-1}\underline L^\text{ss}$$
Conveniently, this cancels most of the "missing" prefactors we put in to compute the Onsager coefficients; hence,

$\underline D^\text{s}$ = Lss*(vacancy concentration)

where the concentration is fractional. In the case of global thermal equilibrium, the vacancy concentration is the equilibrium concentration $\exp(-(E^\text{v}_\text{form} - TS^\text{v}_\text{form})/k_\text{B}T)$.
A similar argument holds for the vacancy diffusivity in the dilute limit:

$\underline D^\text{v}$ = L0vv + (solute concentration)*L1vv
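A sketch using the same illustrative concentrations as in the conversion example above:

Ds = Lss*cv           # dilute-limit solute diffusivity
Dv = L0vv + cs*L1vv   # dilute-limit vacancy diffusivity
print(Ds, Dv, sep='\n')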
The off-diagonal diffusivity terms are more complex because (1) they are non-symmetric ($\underline D^\text{sv} \ne \underline D^\text{vs}$), and (2) the vacancy dependence of the solute activity and the solute dependence of the vacancy activity need to be known to properly include the thermodynamic factors.