``````

In [1]:

import model, modelfast, itertools, random
import numpy
from pylab import bar, plot  # matplotlib helpers used for the charts below

``````

## Switching Data

We generate an action/observation sequence of 500000 symbols (250000 action/observation pairs). At each step there is a 0.01% chance that the world switches between the two POMDPs generating the data.

The underlying POMDPs are deliberately simple and are defined in a separate file (pomdp.py).
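The real switching source lives in pomdp.py; the idea behind it can be sketched in a few lines. The generator below and its trivial stand-in sources are hypothetical, not the project's actual implementation (the real underlying sources are POMDPs emitting alternating action/observation symbols):

```python
import itertools, random

def switcher(p, gen_a, gen_b):
    """Hypothetical sketch of a switching source: yield symbols from one of
    two generators, swapping which one is active with probability p per step."""
    gens = [iter(gen_a), iter(gen_b)]
    active = 0
    while True:
        if random.random() < p:
            active = 1 - active  # the world switches
        yield next(gens[active])

# Trivial stand-in sources; the real ones are the Room and Hallway POMDPs.
random.seed(0)
seq = list(itertools.islice(switcher(0.0001, itertools.repeat(0), itertools.repeat(1)), 10))
```

With a switch probability of 0.0001, a run of this length will almost always stay with the first source.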

``````

In [5]:

import pomdp

``````
``````

In [112]:

seqswitching = list(itertools.islice(pomdp.Switcher(0.0001, pomdp.Room(), pomdp.Hallway()), 500000))

``````

## Models

We make two models. Both are factored models so the predictors don't get confused between predicting actions and observations. We only care about the second factor (the one predicting observations given the history).

The first is a CTW (Context Tree Weighting) model.

The second is an FMN (Forget Me Not) model.

We then run the models on our switching data.
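The Factored wrapper is defined in model.py; a minimal sketch of the idea, with a hypothetical uniform predictor standing in for the real factors, routes even-indexed symbols (actions) to the first factor and odd-indexed symbols (observations) to the second, summing their log probabilities:

```python
import math

class UniformBit:
    """Hypothetical stand-in predictor assigning probability 1/2 to any symbol."""
    def update(self, sym, history):
        return math.log(0.5)

class FactoredSketch:
    """Sketch of a factored model: symbol i is handled by factor i % k,
    so each factor only ever sees one kind of symbol."""
    def __init__(self, factors):
        self.factors = factors
    def update_seq(self, seq, history):
        total = 0.0
        for i, sym in enumerate(seq):
            total += self.factors[i % len(self.factors)].update(sym, history)
            history = history + [sym]
        return total

m = FactoredSketch((UniformBit(), UniformBit()))
lp = m.update_seq([0, 1, 1, 0], [])  # four symbols at probability 1/2 each
```

This mirrors the shape of the calls below, where a Dumb model absorbs the action factor so only the observation factor's predictions matter.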

``````

In [154]:

MCTW = model.Factored((model.Dumb(), modelfast.CTW_KT(8)))

``````
``````

In [155]:

MFMN = model.Factored((model.Dumb(), model.FMN(lambda: modelfast.CTW_KT(8))))

``````
``````

In [156]:

MCTW.update_seq(seqswitching, [])

``````
``````

Out[156]:

-4045.416050022657

``````
``````

In [157]:

MFMN.update_seq(seqswitching, [])

``````
``````

Out[157]:

-701.693517090372

``````

## Analyzing the Models

FMN extracts "useful" models from the history. One way to inspect these extracted models is to compare their predictive ability on fresh streams of data. In particular, we evaluate each extracted model on streams drawn from each of the two underlying POMDPs.
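The scores reported in the cells below are cumulative log probabilities. For a self-contained sense of how such scores arise, here is a Krichevsky-Trofimov (KT) estimator, the classic leaf predictor that a CTW_KT model mixes over contexts. This is a sketch for binary symbols, not the project's actual implementation:

```python
import math

class KT:
    """Krichevsky-Trofimov estimator for a binary stream:
    P(next = s) = (count[s] + 1/2) / (total + 1)."""
    def __init__(self):
        self.counts = [0, 0]
    def update(self, sym):
        lp = math.log((self.counts[sym] + 0.5) / (sum(self.counts) + 1.0))
        self.counts[sym] += 1
        return lp

kt = KT()
# Cumulative log probability of the sequence 0,0,0,1,0,0 (5 zeros, 1 one).
total = sum(kt.update(b) for b in [0, 0, 0, 1, 0, 0])
```

Summing per-symbol log probabilities like this is exactly what produces the negative totals in the Out[...] cells; a model better matched to the stream loses less per symbol.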

``````

In [158]:

MFMN.factors[1].models

``````
``````

Out[158]:

<model.LogStore at 0x113e421d0>

``````
``````

In [159]:

B = MFMN.factors[1].model_factory()

``````
``````

In [160]:

seqhallway = list(itertools.islice(pomdp.Hallway(), 100000))

``````
``````

In [161]:

model_results = [ model.Factored((model.Dumb(), m)).log_predict_seq(seqhallway, []) for m in B.models ]

``````
``````

In [162]:

model_baseline = model.Factored((model.Dumb(), MCTW.factors[1])).log_predict_seq(seqhallway, [])

``````
``````

In [163]:

bar(range(len(B.models)), model_results);
plot((0, len(B.models)), (model_baseline,)*2, 'r');

``````
*(Figure: bar chart of each extracted model's log probability on the hallway stream, with the CTW baseline drawn as a red line.)*
``````

In [164]:

max(model_results), model_baseline

``````
``````

Out[164]:

(-10.513835471291582, -2169.112089504043)

``````
``````

In [165]:

seqroom = list(itertools.islice(pomdp.Room(), 100000))

``````
``````

In [166]:

model_results = [ model.Factored((model.Dumb(), m)).log_predict_seq(seqroom, []) for m in B.models ]

``````
``````

In [167]:

model_baseline = model.Factored((model.Dumb(), MCTW.factors[1])).log_predict_seq(seqroom, [])

``````
``````

In [168]:

bar(range(len(B.models)), numpy.array(model_results));
plot((0, len(B.models)), (model_baseline,)*2, 'r');

``````
*(Figure: bar chart of each extracted model's log probability on the room stream, with the CTW baseline drawn as a red line.)*
``````

In [169]:

max(model_results), model_baseline

``````
``````

Out[169]:

(-397.24810705354076, -707.7020135167986)

``````

The first thing to notice in the two graphs is that the extracted models are essentially specialized: each performs well on one POMDP or the other. The one unspecialized model (Model 6) is the empty CTW model that serves as the base model for FMN.

The second thing to notice is that, for each underlying POMDP, the best model from the set outperforms the single CTW model, which has to predict across both POMDPs.
