In [1]:
using Kalman
This is the example given in Welch and Bishop, 2006.
We want to estimate a constant voltage (set to $x=-0.37727\ \mathrm{V}$ in this example), but our measurements are corrupted by $0.1\ \mathrm{V}$ white, additive measurement noise. The basic Kalman Filter signal and measurement models are
\begin{align} \mathbf{x}_{k+1} &= \mathbf{Ax}_k+\mathbf{Gw}_k \\ \mathbf{z}_{k} &= \mathbf{Hx}_k+\mathbf{v}_k \end{align}
Since our signal doesn't change from step to step and we are measuring the voltage directly, $\mathbf{A} = \mathbf{H} = 1$. The system noise corrupts the signal directly, so $\mathbf{G} = 1$ as well.
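With $\mathbf{A} = \mathbf{H} = \mathbf{G} = 1$ the model reduces to the scalar recursion
\begin{align} x_{k+1} &= x_k + w_k \\ z_k &= x_k + v_k \end{align}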
$\mathbf{A}$, $\mathbf{H}$, and $\mathbf{G}$ are scalars here, but the Kalman.jl interface expects two-dimensional matrices, hence the transposes below (transposing a one-element vector gives a 1×1 matrix).
In [2]:
a = [1]'   # State transition matrix A (1x1)
h = [1]'   # Observation matrix H (1x1)
g = [1]'   # Noise input matrix G (1x1)
Out[2]:
$\mathbf{w}_k$ is normally distributed with scalar variance $\mathbf{Q} = Q$, while $\mathbf{v}_k$ is normally distributed with scalar variance $\mathbf{R} = R$. We set $R = 0.1^2\ \mathrm{V^2} = 0.01\ \mathrm{V^2}$. It helps if our system noise is non-zero (which is realistic in any case), so we'll set $Q = 10^{-10}\ \mathrm{V^2}$.
In [3]:
r = [0.01]'    # Measurement noise variance R (V^2)
q = [1e-10]'   # Process noise variance Q (V^2)
Out[3]:
All we need now are our initial conditions. We'll take our initial voltage estimate to be $\hat{x}_0 = 0.0\ \mathrm{V}$ and our initial covariance to be $\hat{P}_0 = 1\ \mathrm{V^2}$. For Kalman.jl, the state is contained in a State type, which holds a vector for the system state and a matrix for the state covariance.
In [4]:
x0 = State([0.0],[1.0]')   # Initial state estimate and covariance
Out[4]:
We now set up our Kalman filter. We'll use the BasicKalmanFilter type, which holds a State, a LinearModel representing the signal model, and a LinearObservationModel representing the observation model.
In [5]:
f = LinearModel(a,g,q)            # Signal model: x_{k+1} = A x_k + G w_k
z = LinearObservationModel(h,r)   # Observation model: z_k = H x_k + v_k
kf0 = BasicKalmanFilter(x0,f,z)   # Filter starting from initial state x0
kf = kf0                          # Working copy we will step through the measurements
Out[5]:
We'll generate a random set of 50 measurements of our voltage. Measurements are fed to the Kalman filter as an Observation type.
In [6]:
x_true = -0.37727 # We need to know our true voltage to generate the measurements
y = map(i->Observation([i]),x_true+sqrt(r[1])*randn(50))
Out[6]:
Now we run the Kalman filter by alternating a "predict" step and an "update" step for each measurement.
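For reference, the "predict" step propagates the estimate and its covariance through the signal model, and the "update" step folds in the new measurement via the Kalman gain. For our linear model these are the standard recursions
\begin{align} \hat{\mathbf{x}}^-_{k+1} &= \mathbf{A}\hat{\mathbf{x}}_k \\ \mathbf{P}^-_{k+1} &= \mathbf{A}\mathbf{P}_k\mathbf{A}^T + \mathbf{G}Q\mathbf{G}^T \\ \mathbf{K}_{k+1} &= \mathbf{P}^-_{k+1}\mathbf{H}^T\left(\mathbf{H}\mathbf{P}^-_{k+1}\mathbf{H}^T + R\right)^{-1} \\ \hat{\mathbf{x}}_{k+1} &= \hat{\mathbf{x}}^-_{k+1} + \mathbf{K}_{k+1}\left(\mathbf{z}_{k+1} - \mathbf{H}\hat{\mathbf{x}}^-_{k+1}\right) \\ \mathbf{P}_{k+1} &= \left(\mathbf{I} - \mathbf{K}_{k+1}\mathbf{H}\right)\mathbf{P}^-_{k+1} \end{align}
where the first two lines are the predict step and the last three are the update step.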
In [7]:
x = zeros(Float64,50) # Store our state estimates for analysis later
p = ones(Float64,50) # Store our state covariance for analysis later
for i = 1:49
    kf2 = predict(kf)        # Propagate the state and covariance forward
    kf = update(kf2,y[i])    # Fold in measurement i
    x[i+1] = kf.x.x[1]
    p[i+1] = kf.x.p[1]
end
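As a sanity check, here is a minimal hand-rolled scalar version of the same filter: a sketch that bypasses Kalman.jl but reuses the measurements y generated above (x_manual, p_manual, and the other names here are just illustrative). Its output should agree with x and p from the loop above.
In [ ]:
x_hat = 0.0            # initial state estimate (same as x0)
p_hat = 1.0            # initial state variance
q_s = 1e-10            # process noise variance Q
r_s = 0.01             # measurement noise variance R
x_manual = zeros(50)
p_manual = ones(50)
for i = 1:49
    p_pred = p_hat + q_s                       # predict: A = G = 1, so only Q is added
    k_gain = p_pred / (p_pred + r_s)           # Kalman gain (H = 1)
    x_hat = x_hat + k_gain * (y[i].y[1] - x_hat)
    p_hat = (1 - k_gain) * p_pred
    x_manual[i+1] = x_hat
    p_manual[i+1] = p_hat
end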
Let's plot some results.
Here is our state estimate over the 50 measurements we made.
In [8]:
using Winston
plot(x_true*ones(50),"k")        # True voltage (black line)
oplot(map(i->i.y[1],y),"r.")     # Noisy measurements (red dots)
oplot(x,"b")                     # Kalman filter estimate (blue line)
xlim(1,50)
xlabel("Time step")
ylabel("Voltage estimate (V)")
Out[8]:
That converges to $x$ very nicely!
Now let's see how the state covariance changes as we add more measurements.
In [9]:
plot(p,"g")
xlim(2,50)
xlabel("Time step")
ylabel("Variance (V^2)")
Out[9]:
The variance of our estimate has decreased from our initial guess of $1\ \mathrm{V^2}$ to about $0.0002\ \mathrm{V^2}$.
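This is roughly what we'd expect: with $\mathbf{A} = \mathbf{H} = 1$ and negligible process noise, each update pools in one more independent measurement of the constant, so after $N = 49$ updates
\begin{align} P_N \approx \left(\frac{1}{P_0} + \frac{N}{R}\right)^{-1} = \left(\frac{1}{1} + \frac{49}{0.01}\right)^{-1} \approx 2\times 10^{-4}\ \mathrm{V^2} \end{align}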