In [1]:
from __future__ import print_function
import numpy as np
import matplotlib.pylab as plt
import padasip as pa
%matplotlib inline
plt.style.use('ggplot') # nicer plots
np.random.seed(52102) # always use the same random seed to make results comparable
%config InlineBackend.print_figure_kwargs = {}
In [2]:
u = np.arange(0,10,1)
u
Out[2]:
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
If the time series is the only input of the adaptive filter, the input matrix can be a so-called Delay Embedded Matrix (DEM). Such an input matrix can be constructed simply as:
In [3]:
x = pa.input_from_history(u, 4)
x
Out[3]:
In this matrix, every row represents an input vector for one sample. If you want to extend the input matrix with a bias, then
In [4]:
n = 4
x = pa.input_from_history(u, n, bias=True)
x
Out[4]:
The length of the input matrix (number of rows) is $N - n + 1$, where $N$ is the length of the time series and $n$ is the desired number of historical values in one row of the input matrix, so:
In [5]:
N = len(u)
n = 4
N - n + 1
Out[5]:
7
is equal to
In [6]:
len(x)
Out[6]:
7
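When the DEM is used for one-step-ahead prediction, every row has to be paired with the sample that follows it as the target. A minimal sketch of this alignment, using the variables from the cells above (the same slicing appears in the noise cancellation example below):
x = pa.input_from_history(u, n)[:-1] # drop the last row, it has no following sample
d = u[n:] # the target for each row is the next sample of the series
print(len(x) == len(d)) # every row now has exactly one target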
Consider a sine-like signal $\textbf{u}$ contaminated with noise $\textbf{v}$, resulting in a measured signal $\textbf{d}$. The goal is the prediction/reconstruction of the signal $\textbf{u}$ from the measured signal $\textbf{d}$ alone. To evaluate the noise cancellation performance we will use the mean squared error (MSE).
In [7]:
# signals creation: u, v, d
N = 5000
n = 10
u = np.sin(np.arange(0, N/10., N/50000.))
v = np.random.normal(0, 1, N)
d = u + v
# filtering
x = pa.input_from_history(d, n)[:-1]
d = d[n:]
u = u[n:]
f = pa.filters.FilterRLS(mu=0.9, n=n)
y, e, w = f.run(d, x)
# error estimation
MSE_d = np.dot(u-d, u-d) / float(len(u))
MSE_y = np.dot(u-y, u-y) / float(len(u))
# results
plt.figure(figsize=(12.5,6))
plt.plot(u, "r:", linewidth=4, label="original")
plt.plot(d, "b", label="noisy, MSE: {}".format(MSE_d))
plt.plot(y, "g", label="filtered, MSE: {}".format(MSE_y))
plt.xlim(N-100,N)
plt.legend()
plt.tight_layout()
plt.show()
According to this result, the adaptive filter can decrease the MSE caused by noise.
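The same data can be processed with other adaptive filters from padasip, since they share the run() interface used above. A minimal sketch comparing the RLS result with NLMS (the learning rate 0.1 here is an arbitrary, untuned choice):
f_nlms = pa.filters.FilterNLMS(mu=0.1, n=n)
y_nlms, e_nlms, w_nlms = f_nlms.run(d, x)
MSE_nlms = np.dot(u - y_nlms, u - y_nlms) / float(len(u))
print("RLS MSE: {:.5f}, NLMS MSE: {:.5f}".format(MSE_y, MSE_nlms))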
Consider a system output contaminated with noise $\textbf{v}$
$d(k) = 2 x_1(k) + 0.1 x_2(k) - 4 x_3(k) + 0.5 x_4(k) + v(k)$
where $x_i(k)$ is the $i$-th channel (column) at sample $k$ (row) of the input matrix $\textbf{X}$ (random numbers in this example); in this case the input of the system is the same as the input of the filter. The noise $\textbf{v}$ is Gaussian white noise with zero mean and a standard deviation of 0.1. Because the scale of the data is small, we can use a high learning rate $\mu = 0.5$.
In [8]:
# creation of x and d
N = 700
x = np.random.random((N, 4))
v = np.random.normal(0, 1, N) * 0.1
d = 2*x[:,0] + 0.1*x[:,1] - 4*x[:,2] + 0.5*x[:,3] + v
# identification
f = pa.filters.FilterNLMS(mu=0.5, n=4)
y, e, w = f.run(d, x)
# show results
plt.figure(figsize=(12.5,9))
plt.subplot(211);plt.title("Adaptation");plt.xlabel("Number of iteration [-]")
plt.plot(d,"b", label="d - target")
plt.plot(y,"g", label="y - output")
plt.xlim(0, N)
plt.legend()
plt.subplot(212); plt.title("Filter error"); plt.xlabel("Number of iteration [-]")
plt.plot(pa.misc.logSE(e),"r", label="Squared error [dB]");plt.legend()
plt.xlim(0, N)
plt.tight_layout()
plt.show()
print("And the resulting coefficients are: {}".format(w[-1]))
In [9]:
# creation of u, x and d
N = 100
u = np.random.random(N)
d = np.zeros(N)
for k in range(3, N):
    d[k] = 2*u[k] + 0.1*u[k-1] - 4*u[k-2] + 0.5*u[k-3]
d = d[3:]
# identification
x = pa.input_from_history(u, 4)
f = pa.filters.FilterRLS(mu=0.1, n=4)
y, e, w = f.run(d, x)
# show results
plt.figure(figsize=(12.5,9))
plt.subplot(211);plt.title("Adaptation");plt.xlabel("Number of iteration [-]")
plt.plot(d,"b", label="d - target")
plt.plot(y,"g", label="y - output");plt.legend()
plt.subplot(212);plt.title("Filter error");plt.xlabel("Number of iteration [-]")
plt.plot(pa.misc.logSE(e),"r", label="Squared error [db]");plt.legend()
plt.tight_layout()
plt.show()
From the figure above it is possible to see that the prediction error decreases as new samples arrive. By the end of the run, the error is very low.
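The identified weights can again be compared with the parameters used to build $\textbf{d}$. Note the column ordering: assuming input_from_history places the oldest sample in the first column of every row, the weights should converge approximately to $(0.5, -4, 0.1, 2)$, i.e. the coefficients of $u(k-3), u(k-2), u(k-1), u(k)$. A minimal sketch:
true_w = np.array([0.5, -4.0, 0.1, 2.0]) # coefficients of u[k-3], u[k-2], u[k-1], u[k]
print("identified weights: {}".format(w[-1]))
print("absolute error: {}".format(np.abs(w[-1] - true_w)))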