Exact equation from the source code:
population_upperstate = lin_ntot * orthoparafrac * partition/(Z.sum())
tau_dict[linename] = (population_upperstate /
(1. + np.exp(-h*frq/(kb*tkin) ))*ccms**2 /
(8*np.pi*frq**2) * aval *
(1-np.exp(-h*frq/(kb*tex))) /
(width/ckms*frq*np.sqrt(2*np.pi)) )
Equation 16 from Rosolowsky et al 2008:
$$N(1,1) = \frac{8 \pi k \nu_0^2}{h c^3} \frac{1}{A_{1,1}} \sqrt{2\pi}\sigma_\nu (T_{ex}-T_{bg})\tau$$
Rearranges to:
$$\tau = N(1,1) \frac{h c^3}{8\pi k \nu_0^2} A_{1,1} \frac{1}{\sqrt{2 \pi} \sigma_\nu} \left(T_{ex}-T_{bg}\right)^{-1}$$
Equation A4 of Friesen et al 2009:
$$N(1,1) = \frac{8\pi\nu^2}{c^2} \frac{g_1}{g_2} \frac{1}{A_{1,1}} \frac{1+\exp\left(-h\nu_0/k_B T_{ex}\right)}{1-\exp\left(-h \nu_0/k_B T_{ex}\right)} \int \tau(\nu) d\nu$$
Equation 98 of Mangum & Shirley 2015:
$$N_{tot} = \frac{3 h}{8 \pi \mu^2 R_i} \frac{J_u(J_u+1)}{K^2} \frac{Q_{rot}}{g_J g_K g_I} \frac{\exp\left(E_u/k_B T_{ex}\right)}{\exp\left(h \nu/k_B T_{ex}\right) - 1} \left[\frac{\int T_R dv}{f\left(J_\nu(T_{ex})-J_\nu(T_B)\right) }\right]$$
Excitation temperature:
$$T_{ex} \equiv \frac{h\nu_0/k_B}{\ln \frac{n_l g_u}{n_u g_l} } $$
$\nu_0$ = rest frequency of the line
Rearranges to: $$ \frac{n_l g_u}{n_u g_l} = \exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)$$
Boltzmann distribution: $$ \frac{n_u}{n_l} = \frac{g_u}{g_l} \exp\left(\frac{-h \nu_0}{k_B T}\right)$$ where $T$ is a thermal equilibrium temperature
Rearranges to: $$ 1-\frac{n_u g_l}{n_l g_u} = 1-\exp\left(\frac{-h \nu_0}{k_B T}\right)$$
Column Density $$N_u \equiv \int n_u ds$$ $$N_l \equiv \int n_l ds$$
Starting to substitute previous equations into each other:
$$\tau_\nu d\nu= \alpha_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left(1-\frac{g_l n_u}{g_u n_l}\right) \phi_\nu d\nu$$
$$\frac{g_u}{g_l}N_l = N_u\exp\left(\frac{h \nu_0}{k_B T_{ex}}\right)$$
The first substitution is the Boltzmann distribution, with $T_{ex}$ in place of $T$:
$$\int \tau_\nu d\nu = \int \frac{c^2}{8\pi\nu_0^2} \frac{g_u}{g_l} n_l A_{ul} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right] \phi_\nu d\nu $$
The second is the $N_l$-$N_u$ relation:
$$\int \tau_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} A_{ul} N_u\exp\left(\frac{h \nu_0}{k_B T_{ex}}\right) \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right] \int \phi_\nu d\nu $$
Then some simplification:
$$\int \tau_\nu d\nu = \frac{c^2}{8\pi\nu_0^2} A_{ul} N_u \left[ \exp\left(\frac{h \nu_0}{k_B T_{ex}}\right) - 1 \right] \int \phi_\nu d\nu $$
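The step to Mangum & Shirley's Equation 29 below also uses the standard relation between the Einstein $A$ coefficient and the dipole matrix element (stated here for completeness; this is what trades $A_{ul}$ for $\left|\mu_{lu}\right|^2$):
$$A_{ul} = \frac{64 \pi^4 \nu_0^3}{3 h c^3} \left|\mu_{lu}\right|^2$$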
Becomes, via some manipulation, equation 29 of Mangum & Shirley 2015: $$N_u = \frac{3 h c}{8\pi^3 \nu \left|\mu_{lu}\right|^2} \left[\exp\left(\frac{h\nu}{k_B T_{ex}}\right) -1\right]^{-1} \int \tau_\nu d\nu$$ where I have used $T_{ex}$ instead of $T$ here because that is one of the substitutions invoked (quietly) in their derivation. There is some sleight-of-hand regarding assuming $N_l = n_l$ that essentially assumes $T_{ex}$ is constant along the line of sight, but that is fine.
(Equation 30 is the same as this one, but with $dv$ instead of $d\nu$ units)
Solve for tau again (because that's what's implemented in the code):
$$\mathrm{"tau"} = \int \tau_\nu d\nu = N_u \frac{ c^2 A_{ul}}{8\pi\nu_0^2} \left[\exp\left(\frac{h\nu}{k_B T_{ex}}\right) -1\right] $$The key difference from Erik's derivation is that this is $N_u$, but he has defined $N_{(1,1)}= N_u + N_l$.
So, we get $N_l$ the same way as above: $$N_l = \frac{8\pi\nu_0^2}{c^2} \frac{g_l}{g_u} A_{ul}^{-1} \left[ 1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right) \right]^{-1} \int \tau d\nu$$
Added together: $$N_u + N_l = \frac{3 h c}{8 \pi^3 \nu \left|\mu_{lu}\right|^2} \frac{\frac{g_l}{g_u} +\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)}{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} \int \tau d\nu$$
We can solve that back for tau, which is what Erik has done:
$$\int \tau d\nu = (N_u + N_l) \frac{8 \pi^3 \nu \left|\mu_{lu}\right|^2}{3 h c} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {\frac{g_l}{g_u} +\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
$$=(N_u + N_l) \frac{g_u}{g_l}\frac{8 \pi^3 \nu \left|\mu_{lu}\right|^2}{3 h c} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
This is now identical to Erik's equation.
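As a quick sanity check (not part of the original notes), the two forms above are algebraically identical; a minimal sympy sketch, with $x \equiv h\nu_0/k_B T_{ex}$:

import sympy as sp

gu, gl, x = sp.symbols('g_u g_l x', positive=True)  # x = h*nu0/(k_B*T_ex)
form1 = (1 - sp.exp(-x)) / (gl/gu + sp.exp(-x))
form2 = (gu/gl) * (1 - sp.exp(-x)) / (1 + (gu/gl)*sp.exp(-x))
print(sp.simplify(form1 - form2))  # -> 0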
This is actually a problem, because $N_u$ is related to $N_{tot}$ via the partition function, but there is some double-counting going on if we try to relate $N_{(1,1)}$ to $N_{tot}$ with the same equation.
So, to reformulate the equations in pyspeckit using the appropriate values, we want to use both the partition function (calculated using $T_{kin}$) and $N_u$.
Eqn 31:
$$N_u = N_{tot} \frac{g_u}{Q_{rot}} \exp\left(\frac{-E_u}{k_B T_{kin}}\right)$$
is implemented correctly in pyspeckit:
population_upperstate = lin_ntot * orthoparafrac * partition/(Z.sum())
where partition is
...so I'm assuming (haven't checked) that $E_u = h (B_0 J (J+1) + (C_0-B_0)J^2)$
Note that the leading "2" above cancels out in the Z/sum(Z), so it doesn't matter if it's right or not. I suspect, though, that the 2 belongs in front of both the para and ortho states, but it should be excluded for the J=0 case.
(Note May 16, 2018: I believe this was incorporated into the above analysis)
EWR: The above equation is problematic because it relates the total column density to the $(J,J)$ state, which is the equivalent of the $N_{(1,1)}$ term. In the notation above, $N_{(1,1)} = N_u + N_l$, so to get this right, you need to consider the inversion transition splitting on top of the total energy of the state, so that
$$ E_u = h \left[B_0 J (J+1) + (C_0-B_0)J^2\right] + \Delta E_{\mathrm{inv}}, \quad g_u = 1 $$
and
$$ E_l = h \left[B_0 J (J+1) + (C_0-B_0)J^2\right] - \Delta E_{\mathrm{inv}}, \quad g_l = 1 $$
or, since the splitting is small compared to the rotational energy ($\sim 1$ K compared to $>20$ K),
$$Z_J \approx 2 (2J + 1) \exp\left[ \frac{ -h \left[B_0 J (J+1) + (C_0-B_0)J^2\right]}{k_B T_{\mathrm{rot}}}\right]$$
where the leading 2 accounts for the internal inversion states. Since this 2 appears in all the terms, it cancels out in the sum. Note that I have also changed $T_{\mathrm{kin}}$ to $T_{\mathrm{rot}}$, since these two aren't the same and it is the latter that establishes the level populations.
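A minimal numerical sketch of this $Z_J$ expression (not from the original notes): the rotational constants used here, $B_0 \approx 298.1$ GHz and $C_0 \approx 186.7$ GHz, are approximate values assumed for illustration (pyspeckit's ammonia_constants module holds the values the model actually uses), and the ortho/para split and nuclear-spin weights are ignored for simplicity since the code handles those separately via orthoparafrac.

import numpy as np
from astropy import units as u
from astropy import constants

B0 = 298.1 * u.GHz   # assumed approximate NH3 rotational constant
C0 = 186.7 * u.GHz   # assumed approximate NH3 rotational constant
T_rot = 20 * u.K
J = np.arange(1, 7)

# rotational energy of the metastable (J,J) levels
E_rot = constants.h * (B0 * J * (J + 1) + (C0 - B0) * J**2)
Z_J = 2 * (2 * J + 1) * np.exp((-E_rot / (constants.k_B * T_rot)).decompose())
# the leading 2 (inversion doubling) cancels in the fractional populations:
print(dict(zip(J, (Z_J / Z_J.sum()).decompose().value.round(3))))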
Returning to the above, I would then suggest
$$N_{(J,J)} = N_{tot} \frac{Z_J}{\sum_j Z_j} $$
May 16, 2018: https://github.com/pyspeckit/pyspeckit/blob/725746f517e9bdcc22b83f4f9d6c9b8666e0a99e/pyspeckit/spectrum/models/ammonia.py
In this version, we compute the optical depth with the code:
for kk, nuo in enumerate(nuoff):
    tauprof_ = (tau_dict[linename] * tau_wts[kk] *
                np.exp(-(xarr.value+nuo-lines[kk])**2 /
                       (2.0*nuwidth[kk]**2)))
    if return_components:
        components.append(tauprof_)
    tauprof += tauprof_
The total tau is normalized such that $\sum_{hf} \tau_{hf} = \tau_{tot}$ for each line, i.e., the hyperfine $\tau$ values sum to the tau value specified for the line.
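A quick check of that normalization (a sketch; tau_wts_dict is the same dictionary imported in the test cell further below): after the weights are divided by their sum, the per-component peak depths $\tau_{tot} w_k$ sum to $\tau_{tot}$ by construction.

import numpy as np
from pyspeckit.spectrum.models.ammonia_constants import tau_wts_dict

tau_tot = 1.0
wts = np.array(tau_wts_dict['oneone'])
wts = wts / wts.sum()            # normalize the weights, as the model code does
print((tau_tot * wts).sum())     # -> 1.0 == tau_tot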
The question Nico raised is, should we be computing the synthetic spectrum as $1-e^{-\sum_{hf}\tau_{hf,\nu}}$ or as $\sum_{hf}\left(1-e^{-\tau_{hf,\nu}}\right)$?
The former is correct: we only have one optical depth per frequency bin. It doesn't matter what line the optical depth comes from.
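A tiny illustration of why (not from the original notebook): for two coincident components with $\tau=1$ each, summing $(1-e^{-\tau})$ per component can exceed unity, which is unphysical, while a single summed optical depth cannot.

import numpy as np

tau1 = tau2 = 1.0
print(1 - np.exp(-(tau1 + tau2)))                  # correct: ~0.86
print((1 - np.exp(-tau1)) + (1 - np.exp(-tau2)))   # wrong: ~1.26, exceeds 1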
In [1]:
# This is a test to show what happens if you add lines vs. computing a single optical depth per channel
import numpy as np
from pyspeckit.spectrum.models.ammonia_constants import (line_names, freq_dict, aval_dict, ortho_dict,
                                                         voff_lines_dict, tau_wts_dict)
from astropy import constants
from astropy import units as u
import pylab as pl

linename = 'oneone'
xarr_v = (np.linspace(-25,25,1000)*u.km/u.s)
xarr = xarr_v.to(u.GHz, u.doppler_radio(freq_dict['oneone']*u.Hz))
width = 0.1
xoff_v = 0
ckms = constants.c.to(u.km/u.s).value

pl.figure(figsize=(12,12))
pl.clf()

for ii, tau_tot in enumerate((0.001, 0.1, 1, 10,)):
    tau_dict = {'oneone': tau_tot}
    voff_lines = np.array(voff_lines_dict[linename])
    tau_wts = np.array(tau_wts_dict[linename])
    lines = (1-voff_lines/ckms)*freq_dict[linename]/1e9
    tau_wts = tau_wts / (tau_wts).sum()
    nuwidth = np.abs(width/ckms*lines)
    nuoff = xoff_v/ckms*lines

    # tau array: a single summed optical depth per channel
    tauprof = np.zeros(len(xarr))
    # sum of (1-exp(-tau)) over components; reset for each tau_tot panel
    true_prof = np.zeros(len(xarr))

    for kk, nuo in enumerate(nuoff):
        tauprof_ = (tau_dict[linename] * tau_wts[kk] *
                    np.exp(-(xarr.value+nuo-lines[kk])**2 /
                           (2.0*nuwidth[kk]**2)))
        tauprof += tauprof_
        true_prof += (1-np.exp(-tauprof_))

    ax = pl.subplot(4,1,ii+1)
    ax.plot(xarr_v, 1 - np.exp(-tauprof), label=str(tau_tot), zorder=20, linewidth=1)
    ax.plot(xarr_v, true_prof, label=str(tau_tot), alpha=0.7, linewidth=2)
    ax.plot(xarr_v, true_prof-(1-np.exp(-tauprof)) - tau_tot/20, linewidth=1)
    pl.title(str(tau_tot))
In [2]:
from astropy import units as u
from astropy import constants
freq = 23*u.GHz
def tau_wrong(tkin, tex):
    return (1-np.exp(-constants.h * freq/(constants.k_B*tkin)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))

def tau_right(tex):
    return (1-np.exp(-constants.h * freq/(constants.k_B*tex)))/(1+np.exp(-constants.h * freq/(constants.k_B*tex)))
In [3]:
tkin = np.linspace(5,40,101)*u.K
tex = np.linspace(5,40,100)*u.K
In [4]:
grid = np.array([[tau_wrong(tk,tx)/tau_right(tx) for tx in tex] for tk in tkin])
In [5]:
%matplotlib inline
import pylab as pl
In [6]:
pl.imshow(grid, cmap='hot', extent=[5,40,5,40], origin='lower')  # origin='lower' keeps the image oriented consistently with the contour overlay
pl.xlabel("Tex")
pl.ylabel("Tkin")
pl.colorbar()
pl.contour(tex, tkin, grid, levels=[0.75,1,1/0.75], colors=['w','w','k'])
Out[6]:
So the error could be anywhere from roughly 50% to 700% over a reasonable range of temperatures. That's bad. However, the effect on the temperature estimates should be fairly small, since each line is affected in the same way; the biggest effect will be on the column density.
But, is this error at all balanced by the double-counting problem?
Because we were using the partition function directly, it's not obvious. I had assumed we were using the equation written in terms of $N_u$ alone, but we were actually using the $N_u+N_l$ form; i.e., I was using this equation:
$$\int \tau d\nu =(N_u + N_l) \frac{g_u}{g_l}\frac{A_{ul}c^2}{8\pi\nu_0^2} \frac{1-\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} {1 +\frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T_{ex}}\right)} $$
but with $N_u$ in place of $N_u + N_l$.
The magnitude of the error can therefore be estimated by computing $(N_u+N_l)/N_u = 1 + \frac{N_l}{N_u}$.
We can use the Boltzmann distribution to compute this error, then: $$ \frac{n_u}{n_l} = \frac{g_u}{g_l}\exp\left(\frac{-h \nu_0}{k_B T}\right)$$
In [7]:
def nunlnu_error(Tkin):
    # (N_u + N_l)/N_u = 1 + N_l/N_u = 1 + (g_l/g_u) exp(+h*nu0/(k_B*T));
    # g_u = g_l for the inversion doublet, so the exponent is positive
    return 1 + np.exp(constants.h * freq / (constants.k_B * Tkin))
In [8]:
pl.plot(tkin.value, nunlnu_error(tkin))
Out[8]:
So we were always off by a factor very close to 2. The relative values of $\tau$ should never have been affected by this issue.
It will be more work to determine exactly how much the T_K and column estimates were affected.
Comparing $T_{\mathrm{rot}}$ and $T_{\mathrm{kin}}$. If we start with the equation that governs the level populations,
$$N_u = N_{tot} \frac{g_u}{Q_{rot}} \exp\left(\frac{-E_u}{k_B T_{kin}}\right)$$
we get
$$N_u / N_l = \frac{g_u}{g_l} \exp\left(-\frac{E_u - E_l}{k_B T_{kin}}\right)$$
where we really mean $T_{rot}$ instead of $T_{kin}$ here as long as we're talking about just two levels. This gives us a definition,
$$T_{rot} \equiv \frac{\left(E_u - E_l\right)/k_B}{\ln\left(\frac{g_u N_l}{g_l N_u}\right)}$$
which is the rotational temperature for a two-level system... which is just a $T_{ex}$, but one governing levels that are not radiatively coupled.
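A minimal sketch of that definition (not from the original notebook; the energies and degeneracies below are assumed, roughly like the NH3 (2,2)/(1,1) upper-state values, purely for illustration): if the two level populations follow a Boltzmann distribution at some temperature, the definition recovers that temperature.

import numpy as np

k_B = 1.380649e-23            # J/K
E_l = 23.4 * k_B              # assumed lower level energy [J] (~23 K, illustrative)
E_u = 64.9 * k_B              # assumed upper level energy [J] (~65 K, illustrative)
g_l, g_u = 3, 5               # assumed degeneracies, illustrative
T_true = 20.0                 # K

n_u = g_u * np.exp(-E_u / (k_B * T_true))
n_l = g_l * np.exp(-E_l / (k_B * T_true))
T_rot = ((E_u - E_l) / k_B) / np.log((g_u * n_l) / (g_l * n_u))
print(T_rot)                  # -> 20.0 by construction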
So, for example, if we want to know $T_{rot}$ for the 2-2 and 1-1 lines at $n=10^4$ and $T_{kin}=20$ K:
In [9]:
from pyradex import Radex
from astropy import constants, units as u
R = Radex(species='p-nh3', column=1e13, collider_densities={'pH2':1e4}, temperature=20)
tbl = R(collider_densities={'ph2': 1e4}, temperature=20, column=1e13)
In [10]:
tbl[8:10]
Out[10]:
In [11]:
# we're comparing the upper states since these are the ones that are emitting photons
trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
(tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
)
In [12]:
trot
Out[12]:
In [13]:
tbl['Tex'][8:10].mean()
Out[13]:
In [14]:
dT_oneone = -(constants.h * u.Quantity(tbl['frequency'][8], u.GHz)/constants.k_B).to(u.K)
print("delta-T for 1-1_upper - 1-1_lower: {0}".format(dT_oneone))
tex = (dT_oneone *
np.log((tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[8]) /
(tbl['lowerlevelpop'][8] * R.upperlevel_statisticalweight[8]))**-1
)
print("Excitation temperature computed is {0} and should be {1}".format(tex.to(u.K), tbl['Tex'][8]))
Swift et al 2005 eqn A6
$$T_R = T_K \left[ 1 + \frac{T_K}{T_0} \ln \left[1+0.6\exp\left( -15.7/T_K \right)\right] \right]^{-1}$$
where $T_0=41.18$ K.
In [15]:
T0=tbl['upperstateenergy'][9]-tbl['upperstateenergy'][8]
T0
Out[15]:
In [16]:
def tr_swift(tk, T0=T0):
    return tk*(1+tk/T0 * np.log(1+0.6*np.exp(-15.7/tk)))**-1
Note that the approximation "works" - gets something near 20 - for positive or negative values of T0 (but see below)
In [17]:
tr_swift(20, T0=-41.18)
Out[17]:
In [18]:
tr_swift(20, T0=41.18)
Out[18]:
In [19]:
tr_swift(20, T0=41.5)
Out[19]:
In [20]:
def trot_radex(column=1e13, density=1e4, tkin=20):
    tbl = R(collider_densities={'ph2': density}, temperature=tkin, column=column)
    trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
            np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
                   (tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
            )
    return trot
RADEX suggests that the positive T0 value is the correct one (the negative one appeared correct when incorrectly indexed statistical weights were being used)
In [21]:
trot_radex(tkin=20)
Out[21]:
In [22]:
def tex_radex(column=1e13, density=1e4, tkin=20, lineno=8):
    """used in tests below"""
    tbl = R(collider_densities={'ph2': density}, temperature=tkin, column=column)
    return tbl[lineno]['Tex']
In [23]:
%matplotlib inline
import pylab as pl
In [24]:
cols = np.logspace(12,15)
trots = [trot_radex(column=c).to(u.K).value for c in cols]
pl.semilogx(cols, trots)
pl.hlines(tr_swift(20), cols.min(), cols.max(), color='k')
pl.xlabel("Column")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
Out[24]:
In [25]:
densities = np.logspace(3,9)
trots = [trot_radex(density=n).to(u.K).value for n in densities]
pl.semilogx(densities, trots)
pl.hlines(tr_swift(20), densities.min(), densities.max(), color='k')
pl.xlabel("Volume Density")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
Out[25]:
This is the plot that really convinces me that the positive value of T0 is the appropriate value to use for this approximation.
In [26]:
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
pl.plot(temperatures, trots)
# wrong pl.plot(temperatures, tr_swift(temperatures, T0=-41.18), color='k')
pl.plot(temperatures, tr_swift(temperatures, T0=41.18), color='r')
pl.xlabel("Temperatures")
pl.ylabel("$T_{rot} (2-2)/(1-1)$")
Out[26]:
In [27]:
temperatures = np.linspace(5,40,50)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
pl.plot(temperatures, np.abs(trots-tr_swift(temperatures, T0=41.18))/trots)
pl.xlabel("Temperatures")
pl.ylabel("$(T_{rot}(\mathrm{RADEX}) - T_{rot}(\mathrm{Swift}))/T_{rot}(\mathrm{RADEX})$")
Out[27]:
In [28]:
from pyspeckit.spectrum.models.tests import test_ammonia
from pyspeckit.spectrum.models import ammonia
In [29]:
tkin = 20*u.K
trot = trot_radex(tkin=tkin)
print(trot)
In [30]:
spc = test_ammonia.make_synthspec(lte=False, tkin=None, tex=6.66, trot=trot.value, lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
print("For Tkin={1} -> Trot={2}, pyspeckit's cold_ammonia fitter got:\n{0}".format(spc.specfit.parinfo, tkin, trot))
In [31]:
spc.specfit(fittype='cold_ammonia', guesses=[22.80, 6.6, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
bestfit_coldammonia_temperature = spc.specfit.parinfo[0]
print("The best fit cold ammonia temperature is {0} for an input T_rot={1}".format(bestfit_coldammonia_temperature, trot))
If we use the exact tex for each line in the input model, the resulting fitted temperature should, in principle, be more accurate. However, at present, it looks dramatically incorrect.
In [32]:
tex11 = tex_radex(tkin=tkin, lineno=8)
tex22 = tex_radex(tkin=tkin, lineno=9)
print("tex11={0}, tex22={1} for tkin={2}, trot={3}".format(tex11,tex22,tkin,trot))
In [33]:
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
fixed=[False,False,False,False,False,True])
print("For Tkin={1} -> Trot={2}, pyspeckit's cold_ammonia fitter got:\n{0}"
.format(spc.specfit.parinfo, tkin, trot))
print("The best fit cold ammonia temperature is {0} for an input T_rot={1}"
.format(bestfit_coldammonia_temperature, trot))
In a previous iteration of the ammonia model, there was a big (and incorrect) difference between the synthetic spectra from ammonia and cold_ammonia. This is now something of a regression test for that error, which turned out to be from yet another incorrect indexing of the degeneracy.
In [34]:
tkin = 20*u.K
trot = trot_radex(tkin=tkin)
dT0=41.18
print(tkin * (1 + (tkin.value/dT0)*np.log(1 + 0.6*np.exp(-15.7/tkin.value)))**-1)
print("tkin={0} trot={1} tex11={2} tex22={3}".format(tkin, trot, tex11, tex22))
spc = test_ammonia.make_synthspec(lte=False, tkin=None,
tex={'oneone':tex11, 'twotwo':tex22},
trot=trot.value,
lines=['oneone','twotwo'])
spc_666 = test_ammonia.make_synthspec(lte=False, tkin=None,
tex=6.66,
trot=trot.value,
lines=['oneone','twotwo'])
# this one is guaranteed different because tex = trot
spc_cold = test_ammonia.make_synthspec_cold(tkin=tkin.value,
lines=['oneone','twotwo'])
In [35]:
spc[0].plotter(linewidth=3, alpha=0.5)
spc_666[0].plotter(axis=spc[0].plotter.axis, clear=False, color='r', linewidth=1, alpha=0.7)
spc_cold[0].plotter(axis=spc[0].plotter.axis, clear=False, color='b', linewidth=1, alpha=0.7)
The red and black look too different to me; they should differ only by a factor of (tex11-6.66)/6.66 or so. Instead, they differ by a factor of 5-6.
In [36]:
spc[0].data.max(), spc_666[0].data.max()
Out[36]:
In [37]:
spc[1].plotter()
spc_666[1].plotter(axis=spc[1].plotter.axis, clear=False, color='r')
spc_cold[1].plotter(axis=spc[1].plotter.axis, clear=False, color='b')
In [38]:
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
tex11s = np.array([tex_radex(tkin=t, lineno=8) for t in temperatures])
tex22s = np.array([tex_radex(tkin=t, lineno=9) for t in temperatures])
pl.plot(trots, tex11s)
pl.plot(trots, tex22s)
#pl.plot(tr_swift(temperatures), color='k')
pl.ylabel("$T_{ex}$")
pl.xlabel("$T_{rot} (2-2)/(1-1)$")
Out[38]:
Apparently there are some discreteness problems but the ratio changes very little.
In [39]:
temperatures = np.linspace(5,40)
trots = [trot_radex(tkin=t).to(u.K).value for t in temperatures]
tex11s = np.array([tex_radex(tkin=t, lineno=8) for t in temperatures])
tex22s = np.array([tex_radex(tkin=t, lineno=9) for t in temperatures])
pl.plot(trots, tex11s/tex22s)
#pl.plot(tr_swift(temperatures), color='k')
pl.ylabel("$T_{ex} (2-2)/(1-1)$")
pl.xlabel("$T_{rot} (2-2)/(1-1)$")
Out[39]:
In [40]:
from pyspeckit.spectrum.models.tests import test_ammonia
test_ammonia.test_ammonia_parlimits()
test_ammonia.test_ammonia_parlimits_fails()
test_ammonia.test_cold_ammonia()
test_ammonia.test_self_fit()
In [41]:
temperatures = np.array((10,15,20,25,30,35,40))
recovered_tkin = {}
recovered_column = {}
for tkin in temperatures:
    tbl = R(collider_densities={'ph2': 1e4}, temperature=tkin, column=1e13)
    tex11 = tbl['Tex'][8]
    tex22 = tbl['Tex'][9]
    trot = (u.Quantity(tbl['upperstateenergy'][8]-tbl['upperstateenergy'][9], u.K) *
            np.log((tbl['upperlevelpop'][9] * R.upperlevel_statisticalweight[8]) /
                   (tbl['upperlevelpop'][8] * R.upperlevel_statisticalweight[9]))**-1
            )
    spc = test_ammonia.make_synthspec(lte=False, tkin=None,
                                      tex={'oneone':tex11, 'twotwo':tex22},
                                      trot=trot.value,
                                      lines=['oneone','twotwo'])
    spc.specfit.Registry.add_fitter('cold_ammonia',ammonia.cold_ammonia_model(),6)
    spc.specfit(fittype='cold_ammonia', guesses=[23, 5, 13.1, 1, 0.5, 0],
                fixed=[False,False,False,False,False,True])
    recovered_tkin[tkin] = spc.specfit.parinfo['tkin0'].value
    recovered_column[tkin] = spc.specfit.parinfo['ntot0'].value
In [42]:
pl.xlabel("$T_K$")
pl.ylabel("Fitted $T_K$ from cold_ammonia")
pl.plot(list(recovered_tkin.keys()), list(recovered_tkin.values()), 'o')
pl.plot(temperatures, temperatures)
In [ ]:
pl.xlabel("$T_K$")
pl.ylabel("$|T_K-T_{fit}|/T_K$")
inp = np.array(list(recovered_tkin.keys()), dtype='float')
rslt = np.array(list(recovered_tkin.values()), dtype='float')
pl.plot(inp, np.abs(rslt-inp)/rslt, 'o')
In [ ]:
pl.xlabel("$N(NH_3)$")
pl.ylabel("Fitted $N(NH_3)$ from cold_ammonia")
pl.plot(list(recovered_column.keys()), list(recovered_column.values()), 'o')
pl.plot(temperatures, temperatures*0+13)