In [14]:
import buqeyemodel as bq
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import scipy as sp
%matplotlib inline

Bayesian truncation errors in chiral effective field theory: A Recap

Here we reproduce, using the improved truncation error model based on conjugate priors, some of the results in the following papers

  • Furnstahl et al., Quantifying Truncation Errors in Effective Field Theory
  • Melendez et al., Bayesian truncation errors in chiral effective field theory: Nucleon-nucleon observables

In these papers, the convergence pattern of nucleon-nucleon scattering observables, including the total and differential cross sections and a set of spin scattering observables, was studied to infer the effective field theory (EFT) truncation error. For a generic observable $X$, the EFT provides a sequence of order-by-order predictions $\{X_0, \ldots, X_k\}$ up to order $k$. It is assumed in the above papers that the sum of all contributions can be written as \begin{align} X = X_{\mathrm{ref}}\sum_{n=0}^\infty c_n Q^n \end{align} for iid dimensionless coefficients $c_n$. The $k$ lowest orders can then be conditioned upon to estimate the omitted higher-order contributions, and hence the full summation. The expansion parameter $Q$ is a ratio of low- and high-energy scales: the low-energy scale is a combination of the relative scattering momentum $p$ and the pion mass $m_\pi$, while the high-energy scale is the breakdown scale $\Lambda_b$. The specific parameterization is \begin{align} Q = \frac{m_\pi^n + p^n}{m_\pi^{n-1} + p^{n-1}} \frac{1}{\Lambda_b} \end{align} with $n=8$ used in Melendez et al.
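
To make the notation concrete, here is a minimal sketch (with made-up numbers, not results from the papers) of how the dimensionless coefficients follow from order-by-order predictions via $c_n = (X_n - X_{n-1})/(X_{\mathrm{ref}} Q^n)$, with $X_{-1} \equiv 0$:

In [ ]:
import numpy as np

Q, X_ref = 0.33, 10.0                         # hypothetical expansion parameter and reference scale
X_orders = np.array([10.0, 11.2, 11.0])       # hypothetical predictions X_0, X_1, X_2
corrections = np.diff(X_orders, prepend=0.0)  # X_0, X_1 - X_0, X_2 - X_1
n = np.arange(len(X_orders))
c = corrections / (X_ref * Q**n)              # dimensionless coefficients c_0, c_1, c_2
print(c)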

The hierarchical model from the above papers considered the following prior sets, and Melendez et al. focused on set C with $\bar c_< = 0.25$ and $\bar c_> = 10$.

\begin{array}{ccc} \hline \mathrm{Set} & pr(c_n | \bar c) & pr(\bar c) \\ \hline A & \frac{1}{2\bar c}\theta(\bar c - |c_n|) & \frac{1}{\bar c\ln(\bar c_>/\bar c_<)}\theta(\bar c - \bar c_<)\, \theta(\bar c_> - \bar c) \\ B & \frac{1}{2\bar c}\theta(\bar c - |c_n|) & \frac{1}{\sqrt{2\pi}\,\bar c \sigma} e^{-(\ln \bar c)^2/2\sigma^2} \\ C & \frac{1}{\sqrt{2\pi}\,\bar c} e^{-c_n^2/2\bar c^2} & \frac{1}{\bar c\ln(\bar c_>/\bar c_<)}\theta(\bar c - \bar c_<)\, \theta(\bar c_> - \bar c) \\ \hline \end{array}

This package instead employs a conjugate prior set, where the $c_n$ are Gaussian and an inverse gamma prior is placed on $\bar c^2$, that is, $\bar c^2 \sim IG(a, b)$. The parameters $a$ and $b$ are often called the shape and scale of the IG distribution, respectively, and the inverse gamma density is given by \begin{align} IG(x; a, b) = \frac{b^a}{\Gamma(a)} x^{-a-1} e^{-b/x} \end{align} Here we compare the results of this very convenient conjugate formulation to the previously published results.
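
For concreteness, here is a minimal sketch of the conjugate update written directly with scipy (not the package internals): if $c_n \,|\, \bar c^2 \sim N(0, \bar c^2)$ and $\bar c^2 \sim IG(a, b)$, then observing coefficients $c_0, \ldots, c_k$ gives $\bar c^2 \,|\, \vec c \sim IG\!\left(a + \tfrac{k+1}{2},\ b + \tfrac{1}{2}\sum_n c_n^2\right)$, and the posterior predictive for an unseen coefficient is a Student-$t$ distribution.

In [ ]:
import numpy as np
import scipy.stats as stats

a, b = 0.3, 0.2                    # hypothetical hyperparameters
c = np.array([1.0, 0.5, 0.1])      # hypothetical observed coefficients
a_post = a + len(c) / 2            # updated shape
b_post = b + np.sum(c**2) / 2      # updated scale
cbar_sq_posterior = stats.invgamma(a=a_post, scale=b_post)
# Posterior predictive for a new coefficient: Student-t with 2*a_post degrees of freedom
c_next = stats.t(df=2 * a_post, loc=0, scale=np.sqrt(b_post / a_post))
print(cbar_sq_posterior.mean(), c_next.interval(0.68))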

Proof of Concept

Many proof-of-concept tests were performed in Furnstahl et al. Here we reproduce some of those tests with this package. As $a, b \to 0$ the inverse gamma density reduces to the scale-invariant prior $1/x$, so setting $a = b = 0$ is equivalent to prior set $C$ with $\bar c_< = 1/\bar c_> = 0$, and the results should be identical in this case.

The basic steps for using this uncorrelated model are

  • Define a PowerSeries model object with hyperparameters $a$ and $b$
  • Use the observe method to input the partial sums partials of the observable, along with the expansion parameter ratio and the reference scale ref used to extract the coefficients of the series
  • Call another method, such as predictive or predict, to get posteriors or degree-of-belief intervals for the truncation error. Below we use predictive, which returns a frozen scipy $t$ distribution object for the truncation error; standard methods (see the scipy.stats documentation) then give its pdf, credible intervals, etc.

In [15]:
a = 0
b = 0
Q = 0.33

# Must be 2d array, with orders along the 0th axis
# These are [c_0, c_1, c_2]
coeffs1 = np.array([[1.0], [1.0], [1.0]])
coeffs2 = np.array([[1.0], [0.5], [0.1]])
coeffs3 = np.array([[1.0], [0.1], [0.1]])

# PowerSeries accepts the *partial sums*, not the coefficients!
partials1 = bq.partials(coeffs1, ratio=Q)
# Set up model with hyperparameters
test1 = bq.PowerSeries(shape=a, scale=b)
# Input data and the ratio, hyperparameters get updated here
test1.observe(partials=partials1, ratio=Q)
# Get a scipy.stats.t distribution object for the truncation error
dist1 = test1.predictive(rescale=False)


# Repeat for the 2nd set of coefficients
partials2 = bq.partials(coeffs2, ratio=Q)
test2 = bq.PowerSeries(shape=a, scale=b)
test2.observe(partials=partials2, ratio=Q)
dist2 = test2.predictive(rescale=False)


# Repeat for the 3rd set of coefficients
partials3 = bq.partials(coeffs3, ratio=Q)
test3 = bq.PowerSeries(shape=a, scale=b)
test3.observe(partials=partials3, ratio=Q)
dist3 = test3.predictive(rescale=False)


# Plot the results
Deltaks = np.linspace(-0.15, 0.15, 100)
coeffs = [coeffs1, coeffs2, coeffs3]
dists = [dist1, dist2, dist3]
fig, axs = plt.subplots(1, 3, figsize=(12, 3))
for i, ax in enumerate(axs.ravel()):
    dist = dists[i]
    ax.plot(Deltaks, dist.pdf(Deltaks))

    # Plot dob intervals as vertical lines
    dob68 = dist.interval(0.68)
    for dob in dob68:
        ax.plot(dob*np.ones(2), [0, dist.pdf(dob)], c='k', ls='-.', lw=0.5)
    dob95 = dist.interval(0.95)
    for dob in dob95:
        ax.plot(dob*np.ones(2), [0, dist.pdf(dob)], c='k', ls='--', lw=0.5)

    # Format the plot as in Furnstahl et al. for comparison
    ax.set_title(r"pr($\Delta_2$ | c={coeffs})".format(coeffs=coeffs[i].ravel()))
    ax.set_xlabel(r"$\Delta_2$")
    ax.set_ylim([0, 17])
    ax.set_xlim([-0.13, 0.13])
    ax.yaxis.set_major_locator(mpl.ticker.FixedLocator(np.arange(0, 18, 2)))
    ax.yaxis.set_minor_locator(mpl.ticker.FixedLocator(np.arange(1, 19, 2)))
    ax.xaxis.set_minor_locator(mpl.ticker.FixedLocator(np.linspace(-0.15, 0.15, 31)))
    # ax.grid(axis='y')


Compare the above figure with the blue curves from Fig. 4 in Furnstahl et al., reproduced below:

[Fig. 4 of Furnstahl et al.]

The original paper states that Fig. 4 uses the "leading-omitted-term approximation" to the truncation error, but this is a misprint; the curves are indeed identical. Of course, this package allows any $a$ and $b$ to be chosen instead, each of which allows pdfs of $\Delta_k$ to be computed efficiently without any manual integration.
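
As a quick illustration (hyperparameter values chosen arbitrarily, purely for demonstration), a more informative prior plugs into the same workflow shown above:

In [ ]:
# Hypothetical, more informative hyperparameters
informed = bq.PowerSeries(shape=3.0, scale=3.0)
informed.observe(partials=partials2, ratio=Q)
dist_informed = informed.predictive(rescale=False)
print(dist_informed.interval(0.68))  # compare with dist2.interval(0.68) above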

NN Scattering Observables

Although the total cross section was covered in Furnstahl et al., it and other observables were studied more extensively in Melendez et al. Here we reproduce some of the key figures from Melendez et al. with a slightly altered prior on $\bar c$.

Choose Hyperparameters

First let's figure out the hyperparameters of the inverse gamma distribution that best reproduce the "Set C" prior with $\bar c_> = 10$ and $\bar c_< = 0.25$, which was the most extensively used prior set of Melendez et al.


In [16]:
ig = sp.stats.invgamma(a=0.3, scale=0.2)

def f(x):
    # Set C prior on cbar, transformed to x = cbar**2 via the Jacobian |dcbar/dx| = 1/(2 cbar):
    # pr(x) = 1 / (2 x ln(cbar_> / cbar_<)) for 0.25 <= cbar <= 10
    if 0.25 <= np.sqrt(x) <= 10:
        return 1/(2 * np.log(10/0.25) * x)
    else:
        return 0

cbarsq = np.linspace(0.1**2, 1, 1000)
ar = np.array([f(i) for i in cbarsq])

# plt.semilogy(cbarsq, ig.pdf(cbarsq), label='ig')
# plt.semilogy(cbarsq, ar, label='Set C')

plt.plot(cbarsq, ig.pdf(cbarsq), label='ig')
plt.plot(cbarsq, ar, label='Set C')

ax = plt.gca()
ax.set_xlabel(r"$\bar c^2$")
ax.set_ylabel(r"$pr(\bar c^2)$")

plt.legend();


It looks like $a = 0.3$ and $b \in (0.07, 0.2)$ work nicely! Let's see if we can reproduce old results now.

Setup

Observables are considered at many values of the kinematic parameters $E_{\mathrm{lab}}$ and $\theta$. The expansion parameter is assumed to vary with energy, so we must provide a callable function rather than a constant as before. Thus we first define the ratio function that will be used as the expansion parameter.


In [17]:
# Constants: proton/neutron masses and hbar
m_p = 938.27208  # MeV/c^2
m_n = 939.56541  # MeV/c^2
hbarc = 197.33  # MeV fm


def E_to_p(E_lab, interaction):
    """Return p in MeV.

    Parameters
    ----------
    energy      = float
                  lab energy given in MeV.
    interaction = str
                  {"pp", "nn", "np"}
    """

    if interaction == "pp":
        m1, m2 = m_p, m_p
    if interaction == "nn":
        m1, m2 = m_n, m_n
    if interaction == "np":
        m1, m2 = m_n, m_p
    p_rel = np.sqrt(
        E_lab * m2**2 * (E_lab + 2 * m1) /
        ((m1 + m2)**2 + 2 * m2 * E_lab)
        )
    return p_rel


def Q_approx(E, Lambda_b, interaction, single_expansion=False):
    if single_expansion:
        m_pi = 0
    else:
        m_pi = 138  # Set to 0 to just return p/Lambda_b
    # Interpolate to smooth the transition from m_pi to p
    n = 8
    p = E_to_p(E, interaction)
    q = (m_pi**n + p**n) / (m_pi**(n-1) + p**(n-1)) / Lambda_b
    return q


def ratio(X, Lambda_b):
    '''Assume energies are in the first column of X'''
    return Q_approx(X[:, 0], Lambda_b, interaction='np').ravel()

Now import the relevant data for each observable using the wonderful pandas package. Here we use Epelbaum, Krebs, and Meißner's $R=0.9$ fm potential up to N$^4$LO as the order-by-order predictions, and the Nijmegen partial wave analysis as experimental data.


In [18]:
dir_path = '../data/'
pot_type = 'kvnn_41'
observable = 'diff_cross_sec'
filename = observable + '_' + pot_type + '.csv'

total_df = pd.read_csv(
    filepath_or_buffer=os.path.join(dir_path, 'cross_sec_' + pot_type + '.csv'),
    index_col=['Energy']
    )
diff_df = pd.read_csv(
    filepath_or_buffer=os.path.join(dir_path, 'diff_cross_sec_' + pot_type + '.csv'),
    index_col=['Energy', 'theta']
    )
Ay_df = pd.read_csv(
    filepath_or_buffer=os.path.join(dir_path, 'Ay_' + pot_type + '.csv'),
    index_col=['Energy', 'theta']
    )
D_df = pd.read_csv(
    filepath_or_buffer=os.path.join(dir_path, 'D_' + pot_type + '.csv'),
    index_col=['Energy', 'theta']
    )
A_df = pd.read_csv(
    filepath_or_buffer=os.path.join(dir_path, 'A_' + pot_type + '.csv'),
    index_col=['Energy', 'theta']
    )
Axx_df = pd.read_csv(
    filepath_or_buffer=os.path.join(dir_path, 'Axx_' + pot_type + '.csv'),
    index_col=['Energy', 'theta']
    )
Ayy_df = pd.read_csv(
    filepath_or_buffer=os.path.join(dir_path, 'Ayy_' + pot_type + '.csv'),
    index_col=['Energy', 'theta']
    )
idx = pd.IndexSlice

# Nijmegen partial wave analysis, which is used as experimental data
total_npwa_df = pd.read_csv('../data/npwa_C_t-t-t-t.dat', sep=r'\s+', header=None, skiprows=0, index_col=0)
diff_npwa_df = pd.read_csv('../data/npwa_C_0-0-0-0_energy-96.dat', sep=r'\s+', header=None, skiprows=0, index_col=0)

The reference scale $X_{\mathrm{ref}}$ was chosen to be $X_0$ for the total and differential cross section, but it was argued that $X_{\mathrm{ref}} = 1$ was sufficient for the spin observables. Here we set up interpolators that could be used with any energy or $\theta$ value.


In [19]:
energy_all = np.arange(1, 351, 1)
theta_all = np.arange(1, 180, 1)
Xall = bq.cartesian(energy_all, theta_all)

total_refs = total_df['0'].values
total_ref = sp.interpolate.interp1d(energy_all, total_refs)

diff_refs = diff_df['0'].values
diff_ref = sp.interpolate.LinearNDInterpolator(Xall, diff_refs)

Total Cross Section

In Melendez et al., we produced plots of residuals with statistical error bands for various EFT regulators and also ran model checking tests. The residual is defined as \begin{align} X_{\mathrm{res}} \equiv X_{\mathrm{pred}} - X_{\mathrm{exp}} \end{align} First, let's define the values of the energies to be used in the analysis and retrieve the imported data.


In [20]:
# Input data
total_energy = np.arange(1, 351, 1)
total_partials = total_df.loc[:, idx['0':'5']].values.T

# Corresponding EFT orders (the chiral expansion has no order-1 contribution)
orders = np.array([0, 2, 3, 4, 5])
# Ignore these orders in prediction
rm_orders = np.array([0])

Now set up the hyperparameters that will be used for the model.


In [21]:
ratio_kwargs = {'Lambda_b': 600}

# From above testing
a = 0.3
b = 0.2

Let's start by reproducing Fig. 7 from Melendez et al. Instead of using predictive as before, here we use predict, which directly provides the mean and a set of credible intervals for the truncation error. Since rescale is set to True, the posterior corresponds to $X_k + X_{\mathrm{ref}} \Delta_k$ rather than simply $\Delta_k$, which makes placing error bands on existing data very easy.


In [22]:
# colors = ['orange', 'xkcd:green', 'xkcd:azure', 'xkcd:red']
colors = [
    plt.get_cmap("Oranges")(0.4),
    plt.get_cmap("Greens")(0.6),
    plt.get_cmap("Blues")(0.6),
    plt.get_cmap("Reds")(0.6)
]
rescale = True

fig, axs = plt.subplots(2, 2, figsize=(8, 6), sharex=True)
ax_list = axs.ravel()

ax_list[2].set_xlabel(r'$E_{\mathrm{lab}}$ (MeV)')
ax_list[3].set_xlabel(r'$E_{\mathrm{lab}}$ (MeV)')
fig.suptitle('Total Cross Section Residuals')

for i in range(4):
    max_index = i+2
    sub_orders = orders[:max_index]
    sub_partials = total_partials[:max_index]
    total_cross_sec = bq.PowerSeries(shape=a, scale=b)
    total_cross_sec.observe(
        X=total_energy[:, None], partials=sub_partials, ratio=ratio,
        orders=sub_orders, rm_orders=rm_orders, ref=total_ref(total_energy), **ratio_kwargs)
    pred, pred_int = total_cross_sec.predict(dob=[0.68, 0.95], rescale=rescale)
    if rescale:
        pred = pred - total_npwa_df.loc[total_energy].values.ravel()
        pred_int = pred_int - total_npwa_df.loc[total_energy].values.ravel()

    # Format as in Melendez et al.
    ax = ax_list[i]
    ax.set_ylim([-13, 2])
    ax.set_xlim([0, 350])
    ax.set_yticks([-12, -9, -6, -3, 0])
    ax.axhline(0, ls='--', c='k')
    ax.plot(total_energy, pred, c=colors[i])
    ax.plot(total_energy, pred_int[0].T, ls='--', c='gray', lw=0.7)
    ax.plot(total_energy, pred_int[1].T, ls='--', c='gray', lw=0.5)
    ax.xaxis.set_minor_locator(mpl.ticker.FixedLocator(np.linspace(50, 350, 4)))
    ax.yaxis.set_minor_locator(mpl.ticker.FixedLocator(np.arange(-10.5, 2, 1.5)))
    ax.set_title('N{}LO'.format(i+1))


Compare the above figure with Fig. 7 from Melendez et al.

Differential Cross Section

Now set up the differential cross section data.


In [23]:
# Input data
diff_energy = np.array([96])
diff_theta = np.arange(1, 180, 1)
X_diff = bq.cartesian(diff_energy, diff_theta)
diff_df_reduced = diff_df.loc[list(map(tuple, X_diff))]
diff_partials = diff_df_reduced.loc[:, idx['0':'5']].values.T

Below we repeat the analysis for the differential cross section.


In [24]:
rescale = True

fig, axs = plt.subplots(2, 2, figsize=(8, 6), sharex=True)
ax_list = axs.ravel()

ax_list[2].set_xlabel(r'$\theta$ (deg)')
ax_list[3].set_xlabel(r'$\theta$ (deg)')
fig.suptitle('Differential Cross Section Residuals')

for i in range(4):
    max_index = i+2
    sub_orders = orders[:max_index]
    sub_partials = diff_partials[:max_index]
    diff_cross_sec = bq.PowerSeries(shape=a, scale=b)
    diff_cross_sec.observe(
        X=X_diff, partials=sub_partials, ratio=ratio,
        orders=sub_orders, rm_orders=rm_orders, ref=diff_ref(X_diff), **ratio_kwargs)
    pred, pred_int = diff_cross_sec.predict(dob=[0.68, 0.95], rescale=rescale)
    if rescale:
        pred = pred - diff_npwa_df.loc[diff_theta].values.ravel()
        pred_int = pred_int - diff_npwa_df.loc[diff_theta].values.ravel()

    ax = ax_list[i]
    ax.set_ylim([-5, 2])
    ax.set_xlim([0, 180])
    ax.set_yticks([-4.5, -3.0, -1.5, 0.0, 1.5])
    ax.set_xticks([0, 60, 120, 180])
    ax.axhline(0, ls='--', c='k')
    ax.plot(diff_theta, pred, c=colors[i])
    ax.plot(diff_theta, pred_int[0].T, ls='--', c='gray', lw=0.7)
    ax.plot(diff_theta, pred_int[1].T, ls='--', c='gray', lw=0.5)
    ax.xaxis.set_minor_locator(mpl.ticker.FixedLocator(np.arange(0, 180, 20)))
    ax.yaxis.set_minor_locator(mpl.ticker.FixedLocator(np.arange(-3.75, 2, 1.5)))
    ax.set_title('N{}LO'.format(i+1))


Compare the above figure with Fig. 8 from Melendez et al.:

Model Checking

Two model checking diagnostics were employed in Melendez et al.: consistency plots and $\Lambda_b$ posteriors. Consistency plots check whether a $100\alpha\%$ degree-of-belief interval contains the true value of the truncation error approximately $100\alpha\%$ of the time. Validation data can be checked in this manner using the credible_diagnostic method, which computes the fraction of points contained in a given credible interval. The data should lie close to the diagonal in the plot below; the 68% and 95% gray uncertainty bands allow for deviations from the diagonal due to the finite size of the data set.
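
To see what the diagnostic is doing, here is a minimal from-scratch version under simplifying assumptions (standard-normal predictive intervals and made-up residuals; credible_diagnostic itself handles the Student-$t$ case and constructs the bands):

In [ ]:
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.0, size=17)  # hypothetical standardized truncation-error residuals
dobs = np.arange(0.1, 1.0, 0.1)
# Half-widths of the central 100*alpha% intervals of a standard normal predictive
half_widths = stats.norm.ppf(0.5 + dobs / 2)
success_rate = np.array([(np.abs(residuals) <= hw).mean() for hw in half_widths])
# With only N points even a perfect model scatters about the diagonal;
# binomial intervals give the expected 68% band
N = len(residuals)
band_68 = np.array([stats.binom(N, p).interval(0.68) for p in dobs]) / N
print(np.round(success_rate, 2))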


In [28]:
energy_consistency = np.arange(20, 341, 20)
total_partials_consistency = total_df.loc[energy_consistency, idx['0':'5']].values.T
total_coeffs_consistency = bq.coefficients(
    partials=total_partials_consistency, X=energy_consistency[:, None],
    ref=total_ref(energy_consistency), ratio=ratio, orders=orders, **ratio_kwargs)
N = len(energy_consistency)
band_dobs = np.linspace(0.001, 1, 100)
dobs = np.arange(0.1, 1, 0.1)
beta = False

fig = plt.figure(figsize=(4,4))
ax = fig.add_subplot(111)
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
ax.plot([0, 1], [0, 1], c='k')
ax.set_xlabel('DoB')
ax.set_ylabel(r'Success Rate, $N={}$'.format(N))
ax.set_title('Total Cross Section Consistency Plot')
consistency_markers = ['s', 'D', 'o']

# Plot data
for i in range(3):
    # Set up data and observable for each order
    total_consistency = bq.PowerSeries(shape=a, scale=b)
    max_index = i+2
    sub_orders = orders[:max_index]
    sub_partials = total_partials_consistency[:max_index]
    total_consistency.observe(
        X=energy_consistency[:, None], partials=sub_partials, ratio=ratio,
        orders=sub_orders, rm_orders=rm_orders, ref=total_ref(energy_consistency), **ratio_kwargs)
    data = total_partials_consistency[max_index]

    # Must use the leading-omitted-term approximation when comparing to the next order
    D_CI, bands = total_consistency.credible_diagnostic(
        data=data, dobs=dobs, band_intervals=[0.68, 0.95, 0.99],
        band_dobs=band_dobs, beta=beta, order=orders[max_index], rescale=True)
    # Plot each line
    ax.plot(dobs, D_CI, c=colors[i], marker=consistency_markers[i],
            markeredgecolor='k', markeredgewidth=0.5, markersize=8, label='N{}LO'.format(max_index-1))

# Make gray error bands
if not beta:
    ax.fill_between(band_dobs, bands[0, 0], bands[0, 1], color='gray', alpha=0.25)
    ax.fill_between(band_dobs, bands[1, 0], bands[1, 1], color='gray', alpha=0.25)
else:
    ax.fill_betweenx(band_dobs, bands[0, 0], bands[0, 1], color='gray', alpha=0.25)
    ax.fill_betweenx(band_dobs, bands[1, 0], bands[1, 1], color='gray', alpha=0.25)
ax.legend();


Again, compare this with Fig. 12 of Melendez et al.:

Each curve shows a slight discrepancy (only a few points) with Fig. 12, but that is expected since the priors are not identical between the figures. Now let's move on to pr($\Lambda_b | c$). The $\Lambda_b$ posteriors were computed with the uninformative Set C prior, which corresponds exactly to $a = b = 0$ for the inverse gamma prior, so the figures generated below should correspond exactly with those in the paper. Here, the posterior method is used to compute an approximately normalized posterior for the Lambda_b parameter of the ratio function; any keyword parameter of the ratio function can be used in place of Lambda_b. Since the posteriors can be quite skewed, a highest probability density (HPD) interval is calculated using the HPD_pdf function rather than a simple symmetric credible interval.
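
As a reference for what such an interval means, here is a simple grid-based HPD construction (a hypothetical helper, not the package's HPD_pdf, which uses optimization); it accumulates the highest-density grid points until the requested probability mass is reached:

In [ ]:
import numpy as np

def hpd_from_grid(x, pdf, alpha=0.68):
    """Approximate HPD interval from a pdf evaluated on a uniform grid."""
    dx = x[1] - x[0]
    pdf = pdf / np.sum(pdf * dx)     # normalize on the grid
    order = np.argsort(pdf)[::-1]    # visit the highest-density points first
    mask = np.zeros_like(pdf, dtype=bool)
    mass = 0.0
    for idx in order:                # accumulate mass until it reaches alpha
        mask[idx] = True
        mass += pdf[idx] * dx
        if mass >= alpha:
            break
    return x[mask].min(), x[mask].max()  # assumes a unimodal pdf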


In [26]:
energies = np.array([96, 143, 200, 300])
thetas = np.array([60, 120])
X_Lb = bq.cartesian(energies, thetas)
Lb_colors = colors[2:]
Lambda_b_array = np.arange(1, 1501, 1)

def Lb_logprior(Lambda_b):
    """Melendez et al., Eq. (31)"""
    return np.where(np.logical_and(300 <= Lambda_b, Lambda_b <= 1500), np.log(1/Lambda_b), -np.inf)

# Observables
tot_cross = total_df.loc[energies, idx['0':'5']].values.T
diff_cross = diff_df.loc[list(map(tuple, X_Lb)), idx['0':'5']].values.T
Ay = Ay_df.loc[list(map(tuple, X_Lb)), idx['0':'5']].values.T
A = A_df.loc[list(map(tuple, X_Lb)), idx['0':'5']].values.T
D = D_df.loc[list(map(tuple, X_Lb)), idx['0':'5']].values.T
Axx = Axx_df.loc[list(map(tuple, X_Lb)), idx['0':'5']].values.T
Ayy = Ayy_df.loc[list(map(tuple, X_Lb)), idx['0':'5']].values.T

# Combine spin observables
spin = np.hstack((Ay, A, D, Axx, Ayy))
X_Lb_spin = np.vstack((X_Lb, X_Lb, X_Lb, X_Lb, X_Lb))

# Set up lists for loop
observable_list = [tot_cross, diff_cross, spin]
X_Lb_list = [energies, X_Lb, X_Lb_spin]
ref_list = [total_ref, diff_ref, lambda x: 1]
names = ['Total Cross Section', 'Differential Cross Section', 'Spin Observables']

# Begin loop for plotting
fig, axs = plt.subplots(2, 3, figsize=(12, 3.5), sharex=True)
for j, obs_j in enumerate(observable_list):
    xlb = X_Lb_list[j]
    ref_x = xlb
    if xlb.ndim == 1:
        xlb = xlb[:, None]
    for i, max_ind in enumerate(range(4, 6)):
        ax = axs[i, j]
        obs = obs_j[:max_ind]
        ordrs = orders[:max_ind]
        observable = bq.PowerSeries(shape=0, scale=0)
        observable.observe(X=xlb, partials=obs, ratio=ratio,
                orders=ordrs, rm_orders=rm_orders, ref=ref_list[j](ref_x), **ratio_kwargs)
        # Compute posterior
        Lambda_b_pdf = observable.posterior('Lambda_b', logprior=Lb_logprior, Lambda_b=Lambda_b_array)
        label = 'N{n}LO'.format(n=max_ind-1)
        ax.plot(Lambda_b_array, Lambda_b_pdf, label=label, c=Lb_colors[i])

        # Add shading using highest probability density intervals
        for p in [0.68, 0.95]:
            hpd_lower, hpd_upper = bq.HPD_pdf(pdf=Lambda_b_pdf, alpha=p, x=Lambda_b_array,
                                              opt_kwargs={'disp': False})
            ax.fill_between(x=Lambda_b_array, y1=Lambda_b_pdf,
                            where=((hpd_lower < Lambda_b_array) & (Lambda_b_array < hpd_upper)),
                            alpha=0.5, color=Lb_colors[i])
        # Format plots
        ax.set_yticks([])
        ax.set_ylim([0, None])
        ax.grid()
        ax.legend()
    axs[0, j].set_title(r'pr($\Lambda_b | c$): {}'.format(names[j]))
    axs[1, j].set_xlabel(r'$\Lambda_b$ [MeV]')
    axs[0, j].set_xlim([0, 1200])
    axs[0, j].set_xticks([0, 300, 600, 900, 1200])


Compare the above figure with Fig. 22 of Melendez et al.:

Conclusion

We have regenerated important figures from Furnstahl et al. and Melendez et al. This code is significantly more condensed, efficient, and reusable than the original. We believe that the ease with which results can be obtained with this package can greatly expand its domain of applicability.


In [ ]: