Adaptive Filters and Neural Units - Bare Minimum of Information

Introduction

An adaptive filter is a system with a linear filter that has a transfer function controlled by variable parameters and a means to adjust those parameters according to an optimization algorithm.

The output of an adaptive filter $y(k)$ is produced as follows

$y(k) = w_1 x_{1}(k) + ... + w_n x_{n}(k)$,

or in a vector form

$y(k) = \textbf{x}^T(k) \textbf{w}(k)$,

where $k$ is the discrete time index, $(.)^{T}$ denotes transposition, $y(k)$ is the filtered signal, $\textbf{w}$ is the vector of adaptive filter parameters, and $\textbf{x}$ is the input vector (for a filter of size $n$) as follows

$\textbf{x}(k) = [x_1(k), ..., x_n(k)]$.
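A minimal NumPy sketch of this computation (the weight and input values below are illustrative, not from the text):

```python
import numpy as np

# hypothetical filter of size n = 4
w = np.array([0.5, -0.2, 0.1, 0.3])   # adaptive parameters w(k)
x = np.array([1.0, 2.0, 0.5, -1.0])   # input vector x(k)

# filter output y(k) = x^T(k) w(k)
y = np.dot(x, w)
```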

Initial Values of Adaptive Parameters

The values of the adaptive weights (adaptive parameters) $\textbf{w}$ are usually initialized to all zeros, or alternatively to random numbers drawn from a normal distribution with zero mean.
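Both initializations can be sketched in NumPy as follows (the filter size and the standard deviation of the random weights are assumed values):

```python
import numpy as np

n = 4  # filter size (example value)

# initialization with all zeros
w_zeros = np.zeros(n)

# initialization with random numbers (normal distribution, zero mean)
rng = np.random.default_rng(0)
w_random = rng.normal(loc=0.0, scale=0.5, size=n)
```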

Augmentation with Bias

The input vector $\textbf{x}(k)$ can be extended with a bias term. The commonly used value for the bias is $1$. The augmented input vector can look as follows

$\textbf{x}(k) = [1, x_1(k), ..., x_n(k)]$.

This change also means that the vector of adaptive parameters has to change: it should be extended by one item to match the size of the input vector. The bias input does not change during the adaptation process.
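A short sketch of the augmentation described above (the input values are illustrative):

```python
import numpy as np

x = np.array([2.0, 0.5, -1.0])        # original input x(k), n = 3
x_aug = np.concatenate(([1.0], x))    # prepend the constant bias input 1
w = np.zeros(x_aug.size)              # weights extended to size n + 1
y = np.dot(x_aug, w)                  # output of the augmented filter
```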

In some fields (computational intelligence etc.) this augmented version seems to be preferred. An adaptive filter with this modification is often called a Neural Unit.

Terminology in different fields

  • Adaptive Filter = Neural Unit (a Neural Unit is most likely an adaptive filter with random initial adaptive parameters and a bias input)

  • desired value = target

  • least-mean-squares = stochastic gradient descent

Learning Algorithms

The most popular learning algorithms for adaptive filters are:

  • Least-mean-squares (LMS) - this is an implementation of Stochastic Gradient Descent; it is commonly also referred to simply as Gradient Descent adaptation

  • Normalized Least-mean-squares (NLMS) - this is a modification of LMS

  • Recursive Least Squares (RLS)

It is hard to say which algorithm produces the best results; the performance is strongly task dependent.
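As an illustration of the simplest of the listed algorithms, here is a minimal LMS (stochastic gradient descent) sketch in NumPy. The learning rate `mu` and the toy identification task are assumptions for demonstration only:

```python
import numpy as np

def lms_filter(x, d, n, mu=0.1):
    """Adapt a filter of size n with LMS (stochastic gradient descent).

    x  ... input matrix, one row per sample (shape: samples x n)
    d  ... desired values (targets)
    mu ... learning rate (assumed value)
    Returns the filter outputs and the final weights.
    """
    w = np.zeros(n)                     # zeros initialization
    y = np.zeros(len(d))
    for k in range(len(d)):
        y[k] = np.dot(x[k], w)          # filter output y(k) = x^T(k) w(k)
        e = d[k] - y[k]                 # error against the desired value
        w = w + mu * e * x[k]           # LMS weight update
    return y, w

# toy identification task: desired output is a fixed combination of inputs
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
w_true = np.array([0.7, -0.3, 0.2])
d = X @ w_true
y, w = lms_filter(X, d, n=3, mu=0.1)
```

On this noiseless toy task the adapted weights converge to the true combination; on real, noisy data the choice of `mu` trades adaptation speed against steady-state error.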
