In this example, we will replicate the latent class example model from Biogeme.
In [1]:
import larch
import pandas
from larch.roles import P,X
The swissmetro dataset used in this example is conveniently bundled with Larch,
accessible using the data_warehouse
module. We'll load this file using
the pandas read_csv
command.
In [2]:
from larch import data_warehouse
raw = pandas.read_csv(data_warehouse.example_file('swissmetro.csv.gz'))
We can inspect a few rows of data to see what we have using the head
method.
In [3]:
raw.head()
Out[3]:
The Biogeme code includes a variety of commands to manipulate the data
and create new variables. Because Larch sits on top of pandas, a reasonable
method to create new variables is to just create new columns in the
source pandas.DataFrame
in the usual manner for any DataFrame
.
In [4]:
raw['SM_COST'] = raw['SM_CO'] * (raw["GA"]==0)
You can also use the eval
method of pandas DataFrames.
This method takes an expression as a string
and evaluates it within a namespace that has already loaded the
column names as variables.
In [5]:
raw['TRAIN_COST'] = raw.eval("TRAIN_CO * (GA == 0)")
This allows data expressions to be written more succinctly, as long as every column name is also a valid Python identifier. If that isn't the case (e.g., if any column names contain spaces), you'll be better off staying away from this feature.
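For instance, here is a minimal pandas sketch (using a made-up column name, not part of the swissmetro data) showing why a column whose name contains a space can't appear as a bare variable inside eval, and how plain column indexing sidesteps the problem:

```python
import pandas as pd

df = pd.DataFrame({"base fare": [10.0, 20.0], "GA": [0, 1]})

# "base fare" is not a valid Python identifier, so it can't be written as a
# bare variable inside eval(); ordinary column indexing still works fine.
df["fare_paid"] = df["base fare"] * (df["GA"] == 0)
```

Recent versions of pandas also accept backtick-quoted names inside eval (e.g. `` `base fare` ``), but bracket indexing works everywhere.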
We can mix and match between these two methods to create new columns in any DataFrame as needed.
In [6]:
raw['TRAIN_COST_SCALED'] = raw['TRAIN_COST'] / 100
raw['TRAIN_TT_SCALED'] = raw['TRAIN_TT'] / 100
raw['SM_COST_SCALED'] = raw.eval('SM_COST / 100')
raw['SM_TT_SCALED'] = raw['SM_TT'] / 100
raw['CAR_CO_SCALED'] = raw['CAR_CO'] / 100
raw['CAR_TT_SCALED'] = raw['CAR_TT'] / 100
In [7]:
raw['CAR_AV_SP'] = raw.eval("CAR_AV * (SP!=0)")
raw['TRAIN_AV_SP'] = raw.eval("TRAIN_AV * (SP!=0)")
Removing some observations can also be done directly using pandas. Here we identify a subset of observations that we want to keep.
In [8]:
keep = raw.eval("PURPOSE in (1,3) and CHOICE != 0")
You may note that we don't assign this value to a column within the
raw
DataFrame. This is perfectly acceptable, as the output from
the eval
method is just a normal pandas.Series, like any other
single column output you might expect to get from a pandas method.
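To illustrate with a small stand-in DataFrame (not the actual swissmetro data), the result of eval is a plain boolean Series that can be used directly as a row filter, without ever being stored as a column:

```python
import pandas as pd

df = pd.DataFrame({"PURPOSE": [1, 2, 3], "CHOICE": [1, 0, 2]})
keep = df.eval("PURPOSE in (1, 3) and CHOICE != 0")

# `keep` is an ordinary pandas.Series of booleans, so it can be
# passed straight into the indexing operator to select rows.
filtered = df[keep]
```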
When you've created the data you need, you can pass the dataframe to
the larch.DataFrames constructor. Since the swissmetro data is in
idco
format, we'll need to explicitly identify the alternative
codes as well.
In [9]:
dfs = larch.DataFrames(raw[keep], alt_codes=[1,2,3])
The info method of the DataFrames object gives a short summary of the contents.
In [10]:
dfs.info()
A longer summary is available by setting verbose to True.
In [11]:
dfs.info(verbose=True)
You may have noticed that the info summary notes that this data is "not computation-ready". That's because some of the data columns are stored as integers, which can be observed by inspecting the info on the data_co dataframe.
In [12]:
dfs.data_co.info()
When computations are run, we'll need all the data to be in float format, but Larch knows this and will handle it for you later.
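The conversion Larch performs is roughly equivalent to pandas' own astype; a minimal sketch of what "computation-ready" storage means, using a toy DataFrame rather than the Larch API:

```python
import pandas as pd

df = pd.DataFrame({"CHOICE": [1, 2, 3]})   # integer storage, as loaded from CSV
df_float = df.astype("float64")            # float storage, ready for computation
```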
In [13]:
m1 = larch.Model(dataservice=dfs)
m1.availability_co_vars = {
1: "TRAIN_AV_SP",
2: "SM_AV",
3: "CAR_AV_SP",
}
m1.choice_co_code = 'CHOICE'
m1.utility_co[1] = P("ASC_TRAIN") + X("TRAIN_COST_SCALED") * P("B_COST")
m1.utility_co[2] = X("SM_COST_SCALED") * P("B_COST")
m1.utility_co[3] = P("ASC_CAR") + X("CAR_CO_SCALED") * P("B_COST")
In [14]:
m2 = larch.Model(dataservice=dfs)
m2.availability_co_vars = {
1: "TRAIN_AV_SP",
2: "SM_AV",
3: "CAR_AV_SP",
}
m2.choice_co_code = 'CHOICE'
m2.utility_co[1] = P("ASC_TRAIN") + X("TRAIN_TT_SCALED") * P("B_TIME") + X("TRAIN_COST_SCALED") * P("B_COST")
m2.utility_co[2] = X("SM_TT_SCALED") * P("B_TIME") + X("SM_COST_SCALED") * P("B_COST")
m2.utility_co[3] = P("ASC_CAR") + X("CAR_TT_SCALED") * P("B_TIME") + X("CAR_CO_SCALED") * P("B_COST")
For Larch, the class membership model will be set up as yet another discrete choice model. In this case, the choices are not the ultimate choices, but instead are the latent classes. To remain consistent with the Biogeme example, we'll set up this model with only a single constant that determines class membership. Unlike Biogeme, this class membership will be represented with an MNL model, not a simple direct probability.
In [15]:
mk = larch.Model()
mk.utility_co[2] = P("W_OTHER")
The utility function of the first class isn't written here, which means it will implicitly be set as 0.
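With class 1's utility pinned at zero, the class-membership MNL reduces to a binary logit. A quick arithmetic sketch of the implied class shares, in plain Python with a hypothetical value for the W_OTHER constant:

```python
import math

W_OTHER = 0.5  # hypothetical value of the class-2 constant, for illustration only

# Binary logit: the utility of class 1 is 0, so exp(0) = 1 in the denominator.
p_class2 = math.exp(W_OTHER) / (1.0 + math.exp(W_OTHER))
p_class1 = 1.0 - p_class2
```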
Now we're ready to create the latent class model itself, by assembling the components
we created above. The constructor for the LatentClassModel
takes two arguments:
a class membership model, and a dictionary of class models, where the keys in the
dictionary correspond to the identifying codes from the utility functions we wrote
for the class membership model.
In [16]:
from larch.model.latentclass import LatentClassModel
m = LatentClassModel(mk, {1:m1, 2:m2})
Then we'll load the data needed for our models using the load_data
method.
This step will assemble the data needed, and convert it to floating point
format as required.
In [17]:
m.load_data()
In [18]:
m.dataframes.info(verbose=True)
Only the data actually needed by the models has been converted, which may help keep memory usage down on larger models. You may also note that the loaded dataframes object no longer reports that it is "not computation-ready".
To estimate the model, we'll use the maximize_loglike
method. When run
in Jupyter, a live-view report of the parameters and log likelihood is displayed.
In [19]:
result = m.maximize_loglike()
In [20]:
result
Out[20]:
To complete our analysis, we can compute the log likelihood at "null" parameters.
In [21]:
m.loglike_null()
Out[21]:
And the parameter covariance matrices.
In [22]:
m.calculate_parameter_covariance()
In [23]:
m.covariance_matrix
Out[23]:
In [24]:
m.robust_covariance_matrix
Out[24]:
And then generate a report of the estimation statistics. Larch includes a Reporter
class
to help you assemble a report containing the relevant output you want.
In [25]:
report = larch.Reporter("Latent Class Example")
Pipe section headers into the report in Markdown format (one hash for a top-level heading, two hashes for the next level, and so on).
In [26]:
report << "# Parameter Estimates"
Out[26]:
You can also pipe in dataframes directly, including the pf
parameter frame from the model.
In [27]:
report << m.pf
Out[27]:
And a selection of pre-formatted summary sections.
In [28]:
report << "# Estimation Statistics"
report << m.estimation_statistics()
Out[28]:
In [29]:
report << "# Parameter Covariance"
report << "## Typical Parameter Covariance"
report << m.covariance_matrix
report << "## Robust Parameter Covariance"
report << m.robust_covariance_matrix
Out[29]:
In [30]:
report << "# Utility Functions"
report << "## Class 1"
report << "### Formulae"
report << m1.utility_functions(resolve_parameters=False)
report << "### Final Estimated Values"
report << m1.utility_functions(resolve_parameters=True)
Out[30]:
In [31]:
report << "## Class 2"
report << "### Formulae"
report << m1.utility_functions(resolve_parameters=False)
report << "### Final Estimated Values"
report << m1.utility_functions(resolve_parameters=True)
Out[31]:
In addition to reviewing report sections in a Jupyter notebook, the entire report can be saved to an HTML file.
In [32]:
report.save('latent-class-example-report.html', overwrite=True)
Out[32]: