This notebook is part of pybroom.
This notebook demonstrates how to use pybroom when performing Maximum-Likelihood fitting (scalar minimization, as opposed to curve fitting) of a set of datasets with lmfit.minimize. We show that pybroom greatly simplifies comparing, filtering and plotting fit results from multiple datasets. For an example using curve fitting, see pybroom-example-multi-datasets.
In [ ]:
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import lmfit
print('lmfit: %s' % lmfit.__version__)

# matplotlib.pylab.normpdf has been removed from recent matplotlib releases,
# so we define the Gaussian PDF directly.
def normpdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / (np.sqrt(2 * np.pi) * sigma)
In [ ]:
sns.set_style('whitegrid')
In [ ]:
import pybroom as br
In [ ]:
N = 20    # number of datasets
n = 1000  # number of samples in each dataset
np.random.seed(1)
d1 = np.random.randn(N, int(0.6*n))*0.5 - 2
d2 = np.random.randn(N, int(0.4*n))*1.5 + 2
d = np.hstack((d1, d2))
In [ ]:
ds = pd.DataFrame(data=d, columns=range(d.shape[1])).stack().reset_index()
ds.columns = ['dataset', 'sample', 'data']
ds.head()
In [ ]:
kws = dict(bins=np.arange(-5, 5.1, 0.1), histtype='step',
           lw=2, color='k', alpha=0.1)
for i in range(N):
    ds.loc[ds.dataset == i, :].data.plot.hist(**kws)
Here, we use a two-component Gaussian mixture distribution to model the data.
We fit the data with the Maximum-Likelihood method, i.e. we minimize the negative log-likelihood function:
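Explicitly, calling the mixture weight $a_2$ (with $a_1 = 1 - a_2$), the quantity minimized for each dataset $\{x_i\}$ is

$$-\log \mathcal{L} = -\sum_i \log\big[\,a_1\,\mathcal{N}(x_i;\,\mu_1,\sigma_1) + a_2\,\mathcal{N}(x_i;\,\mu_2,\sigma_2)\,\big],$$

which is exactly the function implemented in the next cell.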
In [ ]:
# Model PDF to be maximized
def model_pdf(x, a2, mu1, mu2, sig1, sig2):
    a1 = 1 - a2
    return (a1 * normpdf(x, mu1, sig1) +
            a2 * normpdf(x, mu2, sig2))

# Function to be minimized by lmfit
def log_likelihood_lmfit(params, x):
    pnames = ('a2', 'mu1', 'mu2', 'sig1', 'sig2')
    kws = {n: params[n] for n in pnames}
    return -np.log(model_pdf(x, **kws)).sum()
We define the parameters and "fit" the $N$ datasets by minimizing the (scalar) function log_likelihood_lmfit:
In [ ]:
params = lmfit.Parameters()
params.add('a2', 0.5, min=0, max=1)
params.add('mu1', -1, min=-5, max=5)
params.add('mu2', 1, min=-5, max=5)
params.add('sig1', 1, min=1e-6)
params.add('sig2', 1, min=1e-6)
params.add('ax', expr='a2') # just a test for a derived parameter
Results = [lmfit.minimize(log_likelihood_lmfit, params, args=(di,),
                          nan_policy='omit', method='least_squares')
           for di in d]
Fit results can be inspected with lmfit.fit_report() or params.pretty_print():
In [ ]:
print(lmfit.fit_report(Results[0]))
print()
Results[0].params.pretty_print()
This is good for peeking at the results. However, extracting these data from lmfit objects is quite a chore and requires a good knowledge of the lmfit object structure.
pybroom helps with this task: it extracts data from fit results and returns familiar pandas DataFrames (in tidy format). Thanks to the tidy format, these data can be much more easily manipulated, filtered and plotted.
In [ ]:
dg = br.glance(Results)
dg.drop(columns='message').head()
In [ ]:
dt = br.tidy(Results, var_names='dataset')
dt.query('dataset == 0')
Note that while glance returns one row per fit result, the tidy function returns one row per fitted parameter.
We can query the value of one parameter (peak position) across the multiple datasets:
In [ ]:
dt.query('name == "mu1"').head()
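Since the tidy DataFrame has one row per parameter, it can also be reshaped into a wide table (one row per dataset, one column per parameter) when that layout is more convenient. This is plain pandas rather than a pybroom feature; a minimal sketch:
In [ ]:
# Hypothetical convenience view: one row per dataset, one column per parameter
dt.pivot(index='dataset', columns='name', values='value').head()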
By computing the standard deviation of the peak positions:
In [ ]:
dt.query('name == "mu1"')['value'].std()
In [ ]:
dt.query('name == "mu2"')['value'].std()
we see that the estimate of mu1 has a smaller error than the estimate of mu2.
This difference can also be observed in the histogram of the fitted values:
In [ ]:
dt.query('name == "mu1"')['value'].hist()
dt.query('name == "mu2"')['value'].hist(ax=plt.gca());
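The same kind of check can be extended to all parameters at once by grouping the tidy table; again this is just standard pandas, shown here as a quick sketch:
In [ ]:
# Spread of each fitted parameter across the N datasets (illustrative convenience)
dt.groupby('name')['value'].std()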
We can also use pybroom's tidy_to_dict and dict_to_tidy functions to convert a set of fitted parameters to a dict (and vice-versa):
In [ ]:
kwd_params = br.tidy_to_dict(dt.loc[dt['dataset'] == 0])
kwd_params
In [ ]:
br.dict_to_tidy(kwd_params)
This conversion is useful for calling a Python function, passing argument values taken from a tidy DataFrame.
For example, here we use tidy_to_dict to easily plot the model distribution:
In [ ]:
bins = np.arange(-5, 5.01, 0.25)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
grid = sns.FacetGrid(ds.query('dataset < 6'), col='dataset', hue='dataset', col_wrap=3)
grid.map(plt.hist, 'data', bins=bins, density=True);
for i, ax in enumerate(grid.axes):
    kw_pars = br.tidy_to_dict(dt.loc[dt.dataset == i], keys_exclude=['ax'])
    y = model_pdf(x, **kw_pars)
    ax.plot(x, y, lw=2, color='k')
For the sake of the example we also fit the $N$ datasets with a single Gaussian distribution:
In [ ]:
def model_pdf1(x, mu, sig):
    return normpdf(x, mu, sig)

def log_likelihood_lmfit1(params, x):
    return -np.log(model_pdf1(x, **params.valuesdict())).sum()
In [ ]:
params = lmfit.Parameters()
params.add('mu', 0, min=-5, max=5)
params.add('sig', 1, min=1e-6)
Results1 = [lmfit.minimize(log_likelihood_lmfit1, params, args=(di,),
                           nan_policy='omit', method='least_squares')
            for di in d]
In [ ]:
dg1 = br.glance(Results1)
dg1.drop(columns='message').head()
In [ ]:
dt1 = br.tidy(Results1, var_names='dataset')
dt1.query('dataset == 0')
Pybroom's augment function extracts information that has the same size as the input dataset, for example the array of residuals. In this case, however, we performed a scalar minimization (the log-likelihood function returns a scalar), and therefore the MinimizerResult object does not contain any residual array or other data of the same size as the dataset.
Instead, we compare the single-peak and two-peak models using the results of the tidy function obtained in the previous sections.
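As an extra sanity check (not part of the original analysis), we can re-use the objective functions defined above to re-evaluate the minimized negative log-likelihood of each model on each dataset; lower values indicate a better fit, and the two-peak model should win consistently. The variable names below are just for illustration:
In [ ]:
# Negative log-likelihood at the fitted parameters, for both models
# (illustrative check, not part of the original notebook)
nll_twopeaks = [log_likelihood_lmfit(res.params, di) for res, di in zip(Results, d)]
nll_onepeak = [log_likelihood_lmfit1(res.params, di) for res, di in zip(Results1, d)]
pd.DataFrame({'onepeak': nll_onepeak, 'twopeaks': nll_twopeaks}).describe()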
We start with the following plot:
In [ ]:
dt['model'] = 'twopeaks'
dt1['model'] = 'onepeak'
dt_tot = pd.concat([dt, dt1], ignore_index=True)
In [ ]:
bins = np.arange(-5, 5.01, 0.25)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
grid = sns.FacetGrid(ds.query('dataset < 6'), col='dataset', hue='dataset', col_wrap=3)
grid.map(plt.hist, 'data', bins=bins, density=True);
for i, ax in enumerate(grid.axes):
    kw_pars = br.tidy_to_dict(dt_tot.loc[(dt_tot.dataset == i) & (dt_tot.model == 'onepeak')])
    y1 = model_pdf1(x, **kw_pars)
    li1, = ax.plot(x, y1, lw=2, color='k', alpha=0.5)

    kw_pars = br.tidy_to_dict(dt_tot.loc[(dt_tot.dataset == i) & (dt_tot.model == 'twopeaks')], keys_exclude=['ax'])
    y = model_pdf(x, **kw_pars)
    li, = ax.plot(x, y, lw=2, color='k')

grid.add_legend(legend_data=dict(onepeak=li1, twopeaks=li),
                label_order=['onepeak', 'twopeaks'], title='model');
The problem is that FacetGrid only takes one DataFrame as input. In the previous example we provided the DataFrame of "experimental" data (ds) and used the .map method to plot histograms of the different datasets. The fitted distributions, instead, were plotted manually in the for loop.
We can invert the approach and pass to FacetGrid the DataFrame of fitted parameters (dt_tot), while leaving the simple histogram for manual plotting. In this case we need to write a helper function (_plot) that knows how to plot a distribution given a set of parameters:
In [ ]:
def _plot(names, values, x, label=None, color=None):
    df = pd.concat([names, values], axis=1)
    kw_pars = br.tidy_to_dict(df, keys_exclude=['ax'])
    func = model_pdf1 if label == 'onepeak' else model_pdf
    y = func(x, **kw_pars)
    plt.plot(x, y, lw=2, color=color, label=label)
In [ ]:
bins = np.arange(-5, 5.01, 0.25)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
grid = sns.FacetGrid(dt_tot.query('dataset < 6'), col='dataset', hue='model', col_wrap=3)
grid.map(_plot, 'name', 'value', x=x)
grid.add_legend()
for i, ax in enumerate(grid.axes):
    ax.hist(ds.query('dataset == %d' % i).data, bins=bins, histtype='stepfilled', density=True,
            color='gray', alpha=0.5);
For an even better (i.e. simpler) example of plotting fit results, see pybroom-example-multi-datasets.