Introduction to Signal Processing

Lecture 1

Sivakumar Balasubramanian

Lecturer in Bioengineering

Christian Medical College, Vellore


What is the course about?


  • Course on the theory and practice of signal processing techniques.
  • First part of the course will focus on continuous-time signal processing.
  • Second part of the course will focus on digital signal processing.
  • Final part of the course will introduce ideas on time-frequency analysis.

What to expect from the course?


  • A good understanding of the theory of continuous and discrete-time signal processing.
  • Ability to analyze and synthesize analog and digital filters.
  • An intuitive understanding of time-frequency analysis.


Prerequisites


  • Basic understanding of calculus (limits, derivative, integration).
  • Basic understanding of probability theory.
  • Experience in programming (C and Python would be ideal).


Course Layout


  • Total score: 100 [25 + 15 + 15 + 45]
    • Assignments [6]: 5 × 5 = 25 (best 5 of 6 counted)
    • Surprise quizzes [4]: 3 × 5 = 15 (best 3 of 4 counted)
    • Mid-term exam: 15
    • Final exam: 45


Course Content


  • Signals and systems
  • Continuous-time Signals
  • Continuous-time Systems
  • Impulse response and convolution integral
  • Fourier and Laplace transforms
  • Introduction to filtering
  • Analog filters
  • Sampling theorem
  • Discrete-time signals and systems
  • Discrete Fourier transform and its computation
  • Z-transform
  • Digital filter design and implementation
  • Stochastic processes and spectral estimation
  • Time-frequency analysis


Reference material:


  • Oppenheim, Alan V., and Ronald W. Schafer. Discrete-Time Signal Processing. Prentice Hall, New York, 1999.
  • Proakis, John G. Digital Signal Processing: Principles, Algorithms, and Applications. Pearson Education India, 2001.
  • Devasahayam, Suresh R. Signals and systems in biomedical engineering: signal processing and physiological systems modeling. Springer, 2012.
  • Haykin, Simon, and Barry Van Veen. Signals and systems. John Wiley & Sons, 2007.
  • [In progress] https://github.com/siva82kb/intro_to_signal_processing/


What is signal processing?


  • "Signal processing is an enabling technology that encompasses the fundamental theory, applications, algorithms, and implementations of processing or transferring information contained in many different physical, symbolic, or abstract formats broadly designated as signals and uses mathematical, statistical, computational, heuristic, and/or linguistic representations, formalisms, and techniques for representation, modeling, analysis, synthesis, discovery, recovery, sensing, acquisition, extraction, learning, security, or forensics."[1]
  • In short, an umbrella term covering a wide variety of techniques for acquiring, representing, analyzing, and transforming signals.


[1] Moura, J.M.F. (2009). "What is signal processing?, President’s Message". IEEE Signal Processing Magazine 26 (6). doi:10.1109/MSP.2009.934636


What is a signal?

Any physical quantity carrying information that varies with one or more independent variables.

$$ s\left(t\right) = 1.23t^2 - 5.11t +41.5 $$
$$ s\left(x,y\right) = e^{-(x^2 + y^2 + 0.5xy)} $$

A closed-form mathematical representation is not always possible (e.g. for physiological signals), either because the exact function is not known or because it is too complicated.

Can you think of examples of 3D and 4D signals?


Classification of signals

» Number of dimensions: e.g. 1D, 2D signals

» Scalar vs. Vector: e.g. a gray-scale versus an RGB image

$$ I_g(x,y) \in \mathbb{R} \,\,\, \text{and} \,\,\, I_{color}(x,y) \in \mathbb{R}^3 $$
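The scalar/vector distinction is easy to see in how the two image types are stored as arrays. A minimal sketch (the 4×4 size and random pixel values are purely illustrative, not from the slides):

```python
import numpy as np

# Hypothetical 4x4 images: a gray-scale image is scalar-valued,
# while an RGB image carries a 3-vector at every pixel.
gray = np.random.rand(4, 4)       # I_g(x, y) in R
rgb = np.random.rand(4, 4, 3)     # I_color(x, y) in R^3

print(gray.shape)  # (4, 4): one value per pixel
print(rgb.shape)   # (4, 4, 3): three values (R, G, B) per pixel
```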

» Continuous-time vs. Discrete-time: based on values assumed by the independent variable.

$$\begin{cases} x(t) = e^{-0.1t^{2}}, \,\, t \in \mathbb{R} & \text{Continuous-time} \\ x[n] = e^{-0.1n^{2}}, \,\, n \in \mathbb{Z} & \text{Discrete-time} \end{cases} $$


In [16]:
def continuous_discrete_time_signals():
    t = np.arange(-10, 10.01, 0.01)
    n = np.arange(-10, 11, 1.0)
    x_t = np.exp(-0.1 * (t ** 2))  # continuous signal
    x_n = np.exp(-0.1 * (n ** 2))  # discrete signal

    fig = figure(figsize=(17,5))
    plot(t, x_t, label="$e^{-0.1t^{2}}$")
    stem(n, x_n, label="$e^{-0.1n^{2}}$", basefmt='.')
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    xlabel('Time', fontsize=25)
    legend(prop={'size':30});
    savefig("img/cont_disc.svg", format="svg")

continuous_discrete_time_signals()


» Continuous-valued vs. Discrete-valued: based on values assumed by the dependent variable.

$$ \begin{cases} x(t) \in [a, b] & \text{Continuous-valued} \\ x(t) \in \{a_1, a_2, \cdots\} & \text{Discrete-valued} \\ \end{cases} $$

In [18]:
def continuous_discrete_valued_signals():
    t = np.arange(-10, 10.01, 0.01)
    n_steps = 10.
    x_c = np.exp(-0.1 * (t ** 2))
    x_d = (1/n_steps) * np.round(n_steps * x_c)

    fig = figure(figsize=(17,5))
    plot(t, x_c, label="Continuous-valued")
    plot(t, x_d, label="Discrete-valued")
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    xlabel('Time', fontsize=25)
    legend(prop={'size':25});
    savefig("img/cont_disc_val.svg", format="svg")

continuous_discrete_valued_signals()


The last two classifications can be combined to give four possible types of signals:

  • Continuous-time continuous-valued signals
  • Continuous-time discrete-valued signals
  • Discrete-time continuous-valued signals
  • Discrete-time discrete-valued signals

In [21]:
def continuous_discrete_combos():
    t = np.arange(-10, 10.01, 0.01)
    n = np.arange(-10, 11, 0.5)
    n_steps = 5.

    # continuous-time continuous-valued signal
    x_t_c = np.exp(-0.1 * (t ** 2))
    # continuous-time discrete-valued signal
    x_t_d = (1/n_steps) * np.round(n_steps * x_t_c)
    # discrete-time continuous-valued signal
    x_n_c = np.exp(-0.1 * (n ** 2))
    # discrete-time discrete-valued signal
    x_n_d = (1/n_steps) * np.round(n_steps * x_n_c)

    figure(figsize=(17,8))
    subplot2grid((2,2), (0,0), rowspan=1, colspan=1)
    plot(t, x_t_c,)
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("Continuous-time Continuous-valued", fontsize=25)

    subplot2grid((2,2), (0,1), rowspan=1, colspan=1)
    plot(t, x_t_d,)
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("Continuous-time Discrete-valued", fontsize=25)

    subplot2grid((2,2), (1,0), rowspan=1, colspan=1)
    stem(n, x_n_c, basefmt='.')
    ylim(-0.1, 1.1)
    xlim(-10, 10)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("Discrete-time Continuous-valued", fontsize=25)

    subplot2grid((2,2), (1,1), rowspan=1, colspan=1)
    stem(n, x_n_d, basefmt='.')
    ylim(-0.1, 1.1)
    xlim(-10, 10)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("Discrete-time Discrete-valued", fontsize=25);
    
    tight_layout();
    
    savefig("img/signal_types.svg", format="svg");


continuous_discrete_combos()


EMG recorded from a linear electrode array.

» Deterministic vs. Stochastic: e.g. EMG is an example of a stochastic signal.


In [30]:
def deterministic_stochastic():
    t = np.arange(0., 10., 0.005)
    x = np.exp(-0.5 * t) * np.sin(2 * np.pi * 2 * t)
    y1 = np.random.normal(0, 1., size=len(t))
    y2 = np.random.uniform(0, 1., size=len(t))

    figure(figsize=(17, 10))

    # deterministic signal
    subplot2grid((3,3), (0,0), rowspan=1, colspan=3)
    plot(t, x, label="$e^{-0.5t}\sin 4\pi t$")
    title('Deterministic signal', fontsize=25)
    xticks(fontsize=25)
    yticks(fontsize=25)
    legend(prop={'size':25});

    # stochastic signal - normal distribution
    subplot2grid((3,3), (1,0), rowspan=1, colspan=2)
    plot(t, y1, label="Normal distribution")
    ylim(-4, 6)
    title('Stochastic signal (Normal)', fontsize=25)
    xticks(fontsize=25)
    yticks(fontsize=25)
    legend(prop={'size':25});

    # histogram
    subplot2grid((3,3), (1,2), rowspan=1, colspan=1)
    hist(y1)
    xlim(-6, 6)
    title("Histogram", fontsize=25)
    xticks(fontsize=25)
    yticks([0, 200, 400, 600], fontsize=25)
    legend(prop={'size':25});

    # stochastic signal - uniform distribution
    subplot2grid((3,3), (2,0), rowspan=1, colspan=2)
    plot(t, y2, label="Uniform distribution")
    ylim(-0.3, 1.5)
    title('Stochastic signal (Uniform)', fontsize=25)
    xticks(fontsize=25)
    yticks(fontsize=25)
    legend(prop={'size':25});

    # histogram
    subplot2grid((3,3), (2,2), rowspan=1, colspan=1)
    hist(y2)
    xlim(-0.2, 1.2)
    title("Histogram", fontsize=25)
    xticks(fontsize=25)
    yticks([0, 100, 200], fontsize=25)
    legend(prop={'size':25})
    
    tight_layout()
    
    savefig("img/det_stoch.svg", format="svg");

deterministic_stochastic()


» Even vs. Odd: based on symmetry about the $t=0$ axis.

$$ \begin{cases} x(t) = x(-t), & \text{Even signal} \\ x(t) = -x(-t), & \text{Odd signal} \\ \end{cases} $$

Can there be signals that are neither even nor odd?

Theorem: Any signal can be written as the sum of an even and an odd component:
$$ x(t) = x_{even}(t) + x_{odd}(t) $$
where
$$ x_{even}(t) = \frac{x(t) + x(-t)}{2} \,\,\,\,\, \text{and} \,\,\,\,\, x_{odd}(t) = \frac{x(t) - x(-t)}{2}. $$

In [35]:
def even_odd_decomposition():
    t = np.arange(-5, 5.01, 0.01)  # symmetric grid, so x[::-1] corresponds to x(-t)
    x = (0.5 * np.exp(-(t-2.1)**2) * np.cos(2*np.pi*t) + 
         np.exp(-t**2) * np.sin(2*np.pi*3*t))

    figure(figsize=(17,4))
    # Original function
    plot(t, x, label="$x(t)$") 
    # Even component
    plot(t, 0.5 * (x + x[::-1]) - 2, label="$x_{even}(t)$")
    # Odd component
    plot(t, 0.5 * (x - x[::-1]) + 2, label="$x_{odd}(t)$")
    xlim(-5, 8)
    title('Decomposition of a signal into even and odd components', fontsize=25)
    xlabel('Time', fontsize=25)
    xticks(fontsize=25)
    yticks([])
    legend(prop={'size':25})
    savefig("img/even_odd.svg", format="svg");


even_odd_decomposition()


» Periodic vs. Non-periodic: a signal is periodic if and only if there exists a $T > 0$ such that

$$ x(t) = x(t + T), \,\, \forall t. \,\,\, \text{The smallest such } T \text{ is the fundamental period.}$$

» Energy vs. Power: indicates whether a signal's total energy or average power is finite (roughly, whether the signal is short-lived or persistent).

$$ E = \int_{-\infty}^{\infty}\left|x(t)\right|^{2}dt \,\,\,\,\,\,\,\,\,\, P = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}\left|x(t)\right|^{2}dt $$

A signal is an energy signal, if

$$ 0 < E < \infty $$

and a signal is a power signal, if

$$ 0 < P < \infty $$

(An energy signal has zero average power; a power signal has infinite energy.)
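These definitions can be checked numerically. A quick sketch (the particular signals $e^{-t}u(t)$ and $\sin 2\pi t$ are illustrative choices, not from the slides): the decaying exponential has finite energy $E = \int_0^\infty e^{-2t}dt = 0.5$, while the sinusoid has finite average power $P = 0.5$.

```python
import numpy as np

dt = 0.001
t = np.arange(0, 100, dt)

# Energy signal: x(t) = e^{-t} u(t); analytically E = 1/2
x = np.exp(-t)
E = np.sum(np.abs(x) ** 2) * dt
print(E)  # approx 0.5: finite energy, zero average power

# Power signal: y(t) = sin(2*pi*t); analytically P = 1/2, E is infinite
y = np.sin(2 * np.pi * t)
T = t[-1] + dt
P = np.sum(np.abs(y) ** 2) * dt / T
print(P)  # approx 0.5: finite average power, energy grows with T
```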


What is a system?

A system is any physical device or algorithm that performs some operation on a signal to transform it into another signal.


Classification of systems

Based on the properties of a system:

» Linearity: $\implies$ scaling and superposition

Let us assume

$$ f: x_i(t) \mapsto y_i(t) $$

The system is linear, if and only if,

$$ f: \sum_{i}a_ix_i(t) \mapsto \sum_{i}a_iy_i(t) $$

Which of the following systems are linear?

(a) $y(t) = k_1x(t) + k_2x(t-2)$
(b) $y(t) = \int_{t-T}^{t}x(\tau)d\tau$
(c) $y(t) = 0.5x(t) + 1.5$
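The superposition test can be applied numerically. A sketch for systems (a) and (c) on discrete samples (the coefficient values, the use of a circular shift `np.roll` to stand in for the delay $x(t-2)$, and the random test inputs are all assumptions for illustration): system (a) passes, while system (c) fails because the additive constant makes it affine rather than linear.

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
a1, a2 = 2.0, -3.0

def sys_a(x, k1=1.0, k2=0.5):
    # y(t) = k1 x(t) + k2 x(t-2); circular shift approximates the delay
    return k1 * x + k2 * np.roll(x, 2)

def sys_c(x):
    # y(t) = 0.5 x(t) + 1.5: affine, hence not linear
    return 0.5 * x + 1.5

for f in (sys_a, sys_c):
    lhs = f(a1 * x1 + a2 * x2)          # response to the combined input
    rhs = a1 * f(x1) + a2 * f(x2)       # combination of the responses
    print(f.__name__, np.allclose(lhs, rhs))
# sys_a True, sys_c False
```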

» Memory: a system whose output depends on past or future values of the input is a system with memory; a system whose output depends only on the present input is memoryless.

Memoryless system:

$$ y(t) = 0.5x(t) $$

System with memory:

$$ y(t) = \int_{t-0.5}^{t}x(\tau)d\tau$$

In [ ]:
def memory():
    dt = 0.01
    N = int(round(0.5 / dt))  # number of samples in the 0.5 s integration window
    t = np.arange(-1.0, 5.0, dt)
    x = ((t >= 1.0) & (t < 3.0)).astype(float)  # rectangular pulse

    # memoryless system
    y1 = 0.5 * x
    # system with memory: running integral over the last 0.5 s
    y2 = np.zeros(len(x))
    for i in range(len(y2)):
        y2[i] = np.sum(x[max(0, i - N):i]) * dt

    figure(figsize=(17,4))
    plot(t, x, lw=2, label="$x(t)$")
    plot(t, y1, lw=2, label="$0.5x(t)$")
    plot(t, y2, lw=2, label="$\int_{t-0.5}^{t}x(p)dp$")
    xlim(-1, 5)
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    xlabel('Time', fontsize=25)
    legend(prop={'size':25})
    
    savefig("img/memory.svg", format="svg");

memory()

» Causality: a system whose output depends only on past and present values of the input (never on future values) is causal.

» Time invariance: system remains the same with time.

If a system is time invariant, then

$$ x(t) \mapsto y(t) \implies x(t-\tau) \mapsto y(t-\tau)$$
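Time invariance can also be checked numerically: delaying the input and then applying the system must give the same output as applying the system and then delaying. A sketch on discrete samples (the example systems $y = 0.5x$ and $y(t) = t\,x(t)$, the sinusoidal input, and the 0.5 s delay are illustrative assumptions):

```python
import numpy as np

t = np.arange(0, 10, 0.01)
x = np.sin(2 * np.pi * t)
k = 50  # delay of 0.5 s in samples

def delay(s, k):
    # shift right by k samples, zero-padding at the start
    return np.concatenate([np.zeros(k), s[:-k]])

# Time-invariant system: y(t) = 0.5 x(t)
print(np.allclose(0.5 * delay(x, k), delay(0.5 * x, k)))  # True

# Time-varying system: y(t) = t * x(t) -- the gain changes with time
print(np.allclose(t * delay(x, k), delay(t * x, k)))      # False
```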

» Stability: bounded input produces bounded output

$$ \left|x(t)\right| < M_x < \infty \implies \left|y(t)\right| < M_y < \infty $$
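The distinction is easy to demonstrate with the constant input $x(t) = 1$ (a bounded input): a finite moving average stays bounded, while a running integrator grows without bound. A sketch (the window length, time span, and step size are illustrative assumptions):

```python
import numpy as np

dt = 0.01
t = np.arange(0, 50, dt)
x = np.ones_like(t)  # bounded input, |x(t)| <= 1

# Stable system: 0.5 s moving average -- output stays bounded
win = int(0.5 / dt)
y_stable = np.convolve(x, np.ones(win) / win, mode="same")

# Unstable system: running integrator y(t) = integral of x from 0 to t
y_unstable = np.cumsum(x) * dt

print(y_stable.max())    # stays <= 1
print(y_unstable.max())  # grows with t (here about 50)
```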

» Invertibility: input can be recovered from the output


In [ ]:
%pylab inline
import seaborn as sb

In [13]:
# Functions to generate plots for the different sections.
def signal_examples():
    t = np.arange(-5, 5, 0.01)
    s1 = 1.23 * (t ** 2) - 5.11 * t + 41.5
    x, y = np.arange(-2.0, 2.0, 0.01), np.arange(-2.0, 2.0, 0.01)

    fig = figure(figsize=(17, 7))
    subplot(121)
    plot(t, s1)
    xlabel('Time', fontsize=20)
    xlim(-5, 5)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("$1.23t^2 - 5.11t + 41.5$", fontsize=30)

    subplot(122)
    s = np.array([[np.exp(-(_x**2 + _y**2 + 0.5*_x*_y)) for _y in y] for _x in x])
    X, Y = meshgrid(x, y)
    contourf(X, Y, s)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("$s(x, y) = e^{-(x^2 + y^2 + 0.5xy)}$", fontsize=30);
    savefig("img/signals.svg", format="svg")

def memory():
    dt = 0.01
    N = int(round(0.5 / dt))  # number of samples in the 0.5 s integration window
    t = np.arange(-1.0, 5.0, dt)
    x = ((t >= 1.0) & (t < 3.0)).astype(float)  # rectangular pulse

    # memoryless system
    y1 = 0.5 * x
    # system with memory: running integral over the last 0.5 s
    y2 = np.zeros(len(x))
    for i in range(len(y2)):
        y2[i] = np.sum(x[max(0, i - N):i]) * dt

    figure(figsize=(17,4))
    plot(t, x, lw=2, label="$x(t)$")
    plot(t, y1, lw=2, label="$0.5x(t)$")
    plot(t, y2, lw=2, label="$\int_{t-0.5}^{t}x(p)dp$")
    xlim(-1, 5)
    ylim(-0.1, 1.1)
    xlabel('Time', fontsize=15)
    legend(prop={'size':20});

In [1]:
from IPython.core.display import HTML
def css_styling():
    with open("../../styles/custom_aero.css", "r") as f:
        styles = f.read()
    return HTML(styles)

css_styling()

