Introduction to Signal Processing

Version 0.1

Chapter 3: Continuous-time Concepts: Systems


This chapter delves into the concept of systems that was introduced in Chapter 1. The focus here is on continuous-time systems, almost entirely on a class of systems called linear time-invariant (LTI) systems.

Operating on and transforming signals according to the needs of an application is at the core of signal processing. A transformation could be done for various reasons: changing signal amplitude, removing unwanted signal components, decomposing arbitrary signals into familiar signals (like those presented in Chapter 2), extracting meaningful information from a signal, etc. Systems carry out this act of processing signals. In this chapter we will look at some of the basic operations that can be performed on signals, which can be used for understanding, analysing and building systems to process signals.

3.1 Basic linear operations on signals

Operations on signals are the basic building blocks of systems. Mathematically, a system can be realized through the interconnection of different operations that together result in the overall composite operation carried out by the system. There are countless ways in which one could mathematically operate on a signal.

An operation on a signal can be broadly classified into two types: (a) an operation on the dependent variable, or (b) an operation on the independent variable.

3.1.1 Operations on the dependent variable

These are operations that transform just the dependent variable of a given signal to produce another signal. The following are some of the basic operations that can be performed on the dependent variable.

Scaling deals with the multiplication of a signal by a constant scalar value. Consider a signal $x\left(t\right)$ and a scaling factor $c$, then the scaled signal $y\left(t\right)$ is,

$$ y\left(t\right) = cx\left(t\right) $$

When the scaling constant $c$ is dimensionless, both the input and output signals are of the same type, and the operation can be called amplification or attenuation. However, when $c$ has some dimension, the operation simply transforms one signal type into another.

Consider a simple voltage amplifier with a gain $g$. Here $g$ is dimensionless and the amplifier system simply boosts the amplitude of the input voltage signal $v_{i}\left(t\right)$ to produce an output voltage signal $v_{o}\left(t\right)$.

$$ v_{o}\left(t\right) = gv_{i}\left(t\right) $$

On the other hand, a resistor $R$ can be seen as a system whose input is the current $i\left(t\right)$ through it, and whose output is the voltage $v\left(t\right)$ across it. The input is a current signal (measured in Amperes) and the output is a voltage signal (measured in Volts); the resistor (measured in Ohms) transforms the current into a voltage.

$$ v\left(t\right) = Ri\left(t\right) $$

Addition produces a result that is the sum of two signals. Consider $x_{1}\left(t\right)$ and $x_{2}\left(t\right)$. The signal $y\left(t\right)$ obtained by the addition of these two signals is,

$$ y\left(t\right) = x_{1}\left(t\right) + x_{2}\left(t\right) $$

Differentiation produces an output signal that is equal to the derivative of the input signal with respect to its independent variable. Consider an input signal $x\left(t\right)$, then the output signal $y\left(t\right)$ is,

$$ y\left(t\right) = \frac{d}{dt}x\left(t\right) $$

Integration produces an output signal that is equal to the running area under the given continuous-time signal $x\left(t\right)$ up to time $t$. The integration operation can be seen as an accumulation process of the input signal.

$$ y\left(t\right) = \int_{-\infty}^{t}x\left(\tau\right)d\tau $$
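Each of these operations is easy to approximate numerically. The following is a minimal sketch (the Gaussian input and the scaling constant are arbitrary choices for illustration), with np.gradient approximating the derivative and a running sum approximating the integral.

In [ ]:
import numpy as np

# Time axis and an example input signal.
dt = 0.01
t = np.arange(-5, 5, dt)
x = np.exp(-t**2)

y_scaled = 3.0 * x            # scaling by c = 3
y_sum = x + np.sin(t)         # addition of two signals
y_diff = np.gradient(x, dt)   # numerical approximation of dx/dt
y_int = np.cumsum(x) * dt     # running sum approximates the integral of x
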

Exercises:

(a) Give examples of physical systems that carry out differentiation and integration operations.

(b) Verify that the above operations are linear operations.

3.1.2 Operations on the independent variable

Time scaling stretches or compresses a given signal along the time axis. The signal $y\left(t\right)$ obtained by scaling the independent variable of a signal $x\left(t\right)$ by a positive scalar factor $a$ is given by,

$$ y\left(t\right) = x\left(at\right) $$

If $a > 1$, then the time scaling operation is a compression, and if $0 < a < 1$, then it is a dilation. The effect of time scaling on a signal is demonstrated in the following.


In [1]:
%pylab inline
import seaborn as sb


Populating the interactive namespace from numpy and matplotlib

In [2]:
# Time
t = np.arange(-5, 5, 0.01)

# Unscaled signal.
x_us = np.exp(-t**2)

# Compressed signal.
a = 2.0
x_c = np.exp(-(a*t)**2)

# Dilated signal.
a = 0.6
x_d = np.exp(-(a*t)**2)

figure(figsize=(13,6))
subplot2grid((2,1), (0,0), rowspan=1, colspan=1)
plot(t, x_us, label="$e^{-t^2}$")
plot(t, x_c, label="$e^{-(2t)^{2}}$")
xlim(-5., 5.)
ylim(-0.1, 1.1)
title('Time scaling: Compression', fontsize=15)
legend(prop={'size':20})

subplot2grid((2,1), (1,0), rowspan=1, colspan=1)
plot(t, x_us, label="$e^{-t^2}$")
plot(t, x_d, label="$e^{-(0.6t)^{2}}$")
xlabel('Time', fontsize=15)
xlim(-5., 5.)
ylim(-0.1, 1.1)
title('Time scaling: Dilation', fontsize=15)
legend(prop={'size':20});


Reflection refers to the process of mirroring a signal $x\left(t\right)$ about the origin (i.e. $t = 0$).

$$ y\left(t\right) = x\left(-t\right) $$

In [3]:
def e(t):
    return np.exp(-t) if  t >= -1.0 else 0.

# Time
time = np.arange(-5, 5, 0.01)

# Original and reflected signals.
x = [e(t) for t in time]
x_r = [e(-t) for t in time]

figure(figsize=(13,3))
plot(time, x, label="$x(t)$")
plot(time, x_r, label="$x(-t)$")
xlim(-5., 5.)
ylim(-0.1, 3.1)
title('Reflection', fontsize=15)
xlabel('Time', fontsize=15)
legend(prop={'size':20});


Time shifting deals with the translation of a signal along the time axis. Consider a signal $x\left(t\right)$. The time-shifted version of this signal by $t_{0}$ is given by,

$$ y\left(t\right) = x\left(t - t_{0}\right) $$

When $t_{0} > 0$, $x\left(t - t_{0}\right)$ shifts $x\left(t\right)$ to the right along the time axis, while if $t_0 < 0$ then $x(t-t_0)$ shifts $x(t)$ to the left.


In [4]:
def e(t):
    return np.exp(-t) if  t >= 0 else 0.

# Time
time = np.arange(-5, 5, 0.01)

# Original and time-shifted signals.
x = [e(t) for t in time]
x_right = [e(t - 0.5) for t in time]   # shifted right by 0.5
x_left = [e(t + 3.1) for t in time]    # shifted left by 3.1

figure(figsize=(13,3))
plot(time, x, label="$x(t)$")
plot(time, x_right, label="$x(t - 0.5)$")
plot(time, x_left, label="$x(t + 3.1)$")
xlim(-5., 5.)
ylim(-0.1, 1.1)
title('Time shifting', fontsize=15)
xlabel('Time', fontsize=15)
legend(prop={'size':20});


Combining operations on the independent variable

The three operations on the independent variable - time scaling, reflection and time shifting - can be combined together to compose arbitrary operations on the independent variable. For example, consider the following operation.

$$ y(t) = x(b - at) $$

When trying to perform this operation on a given signal, it is important to carry out the different operations in the correct order. In such problems, the following steps must be followed:

  • Step 1: Time shift the original signal $x(t)$ by an amount $-b$ to get the intermediate signal $v(t)$. $$ v(t) = x(t + b) $$

  • Step 2: Scale the intermediate signal by the factor $a$, and then perform the reflection operation, if any. $$ y(t) = v(-at) = x(-at + b) $$


In [5]:
def f(t):
    m = 0.5 if t <=0 else -2.0
    return m*t + 1. if  t>= -2.0 and t < 0.5 else 0.

# Intermediate time shift step
def intermediate1(t, b):
    return f(t - b)

# Time
time = np.arange(-7, 7, 0.01)

# Original signal.
x = [f(t) for t in time]
# Modified signals.
x1 = [f(0.5*t + 1.0) for t in time]
x2 = [f(2. - 1.5*t) for t in time]

figure(figsize=(13,3))
plot(time, x, label="$x(t)$")
plot(time, x1, label="$x(0.5t + 1.0)$")
plot(time, x2, label="$x(2 - 1.5t)$")
xlim(-7., 7.)
ylim(-0.05, 1.1)
title('Composite operations on the time axis', fontsize=15)
xlabel('Time', fontsize=15)
gca().xaxis.set_ticks(np.arange(-7, 7, 1.0))
legend(prop={'size':20});


3.2 Linear Time-Invariant Systems

An important class of systems that provide practical and usable approximation of real life systems are systems that are both linear and time-invariant (refer to Chapter 1). This essentially means the following:

Consider a system $H$, such that $y_{i}\left(t\right)$ is the output of the system for an input $x_{i}\left(t\right)$, where $i \in \{1,2, ..., N\}$. The system $H$ is a linear time-invariant system if and only if,

$$ H\bigg\{\sum_{i=1}^{N}a_{i}x_{i}\left(t-T_{i}\right)\bigg\} = \sum_{i=1}^{N}a_{i}y_{i}\left(t-T_{i}\right) $$

where, $a_{i}$ and $T_{i}$ are scalar quantities.

The above statement is simply the properties of linearity and time-shift invariance combined together.
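This combined property can be checked numerically for a simple LTI operation. The following is a minimal sanity-check sketch for convolution with a fixed kernel (the kernel, signals, weights and shifts below are arbitrary choices):

In [ ]:
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
h = np.exp(-t)                          # an arbitrary kernel; H{x} = x * h

def H(x):
    # Discrete approximation of convolution with h, truncated to len(t).
    return np.convolve(x, h)[:len(t)] * dt

def shift(x, T):
    # Delay x by T seconds (an integer number of samples).
    n = int(round(T / dt))
    return np.concatenate([np.zeros(n), x[:len(x) - n]])

x1, x2 = np.sin(t), np.exp(-(t - 5)**2)
a1, a2, T1, T2 = 2.0, -0.5, 1.0, 2.5

lhs = H(a1*shift(x1, T1) + a2*shift(x2, T2))
rhs = a1*shift(H(x1), T1) + a2*shift(H(x2), T2)
print(np.allclose(lhs, rhs))            # True for an LTI operation
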

3.3 Time-domain characterization of Continuous-time LTI systems

Given a system, we would like to characterize and understand its behaviour, either to control the system or to design similar systems for processing signals. We will present four different mathematical tools for characterizing the temporal behaviour of an LTI system:

  1. Impulse response
  2. Differential equations
  3. Block diagram
  4. State space description

3.4 Impulse Response

The impulse response of an LTI system is its output when an impulse function $\delta\left(t\right)$ is applied as the input. The impulse response completely characterizes an LTI system: knowledge of an LTI system's impulse response allows one to calculate the output of the system for any arbitrary input.

In order to understand this, let us consider a system $H$ with impulse response $h\left(t\right)$, $$ h\left(t\right) = H\{\delta\left(t\right)\} $$

The impulse response $h\left(t\right)$ is the limit of the sequence of functions that are the outputs of the system $H$ for the sequence of inputs $g_{T}\left(t\right) = \frac{1}{T}g_{0}\left(\frac{t}{T}\right)$, which are the time-compressed and scaled versions of a function $g_{0}\left(t\right)$, such that $\int_{-\infty}^{\infty}g_{T}\left(t\right)dt = 1$.

$$ h\left(t\right) = \lim_{T \to 0}H\big\{g_{T}\left(t\right)\big\} $$

Here, the exact nature of $g_{0}\left(t\right)$ does not matter as long as the area under the signal is unity.

Let $x\left(t\right)$ be an arbitrary input to $H$ whose output we are interested in knowing. An interesting property of any continuous-time signal (indeed, of any signal) is that it can be represented as a weighted superposition of time-shifted impulse functions, $$ x\left(t\right) = \int_{-\infty}^{\infty}x\left(\tau\right)\delta\left(t - \tau\right)d\tau $$

The output of the system $H$ for $x\left(t\right)$ is then $$ y\left(t\right) = H\{x\left(t\right)\} = H\bigg\{\int_{-\infty}^{\infty}x\left(\tau\right)\delta\left(t - \tau\right)d\tau\bigg\} $$

In order to evaluate this integral we will make use of a limiting argument. First consider $x_{T}\left(t\right)$, an approximation of $x\left(t\right)$, represented as a weighted superposition of time-shifted rectangular pulses $g_{T}\left(t\right)$,

$$ g_{T}\left(t\right) = \begin{cases} \frac{1}{T} & -\frac{T}{2} \leq t \leq \frac{T}{2} \\ 0 & \mathrm{Otherwise} \end{cases} $$

$$ x_{T}\left(t\right) = \sum_{k=-\infty}^{\infty}x\left(kT\right)g_{T}\left(t - kT\right)T \implies x\left(t\right) = \lim_{T \to 0} x_{T}\left(t\right) $$

If we apply $x_{T}\left(t\right)$ to the system $H$, then we get

$$ y_{T}\left(t\right) = H\Big\{x_{T}\left(t\right)\Big\} = H\bigg\{ \sum_{k=-\infty}^{\infty}x\left(kT\right)g_{T}\left(t - kT\right)T \bigg\} $$

Since $H$ is linear and time-invariant, we have

$$ y_{T}\left(t\right) = \sum_{k=-\infty}^{\infty}x\left(kT\right)H\bigg\{ g_{T}\left(t - kT\right) \bigg\}T $$

Now, applying the limits to the above equation, we have

$$ y\left(t\right) = \lim_{T \to 0}y_{T}\left(t\right) = \lim_{T \to 0} \sum_{k=-\infty}^{\infty}x\left(kT\right)H\bigg\{ g_{T}\left(t - kT\right) \bigg\}T $$

$$ \implies y\left(t\right) = \int_{-\infty}^{\infty}x\left(\tau\right)h\left(t - \tau\right)d\tau $$

The above equation for $y\left(t\right)$ tells us that the output of an LTI system to an arbitrary input is equal to a weighted superposition of time-shifted impulse responses of the system, where the weights are the values of the input signal. This weighted superposition is called the convolution integral.

$$ y\left(t\right) = x\left(t\right) * h\left(t\right) = \int_{-\infty}^{\infty}x\left(\tau\right)h\left(t - \tau\right)d\tau $$

Here, we say that the output of the system $y\left(t\right)$ is obtained by the convolution of the input $x\left(t\right)$ with the impulse response $h\left(t\right)$.
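Numerically, the convolution integral becomes a discrete sum; the following is a minimal sketch using np.convolve (the pulse input and exponential impulse response are arbitrary choices, and the factor dt turns the sum into an approximation of the integral):

In [ ]:
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)
x = ((t >= 0) & (t <= 1)).astype(float)   # unit pulse input
h = np.exp(-t)                            # example impulse response

# Discrete approximation of y(t) = (x * h)(t), truncated to len(t).
y = np.convolve(x, h)[:len(t)] * dt
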

Impulse response of an RC circuit

Let us now take a look at a real example and attempt to calculate the impulse response. The example we will consider is the RC circuit from Chapter 1. The following is the output voltage across the capacitor when an input voltage $v_{in}\left(t\right)$ is applied.

$$ v_{out}\left(t\right) = e^{-t/RC} \left(\frac{1}{RC}\int_{0}^{t} {e^{\tau / RC}{v_{in}\left(\tau\right)}}d\tau\right) + v_{out}\left(0\right)e^{-t/RC} $$

Consider the following input voltage, $$ v_{in}\left(t\right) = \begin{cases} \frac{1}{T} & 0 \leq t \leq T \\ 0 & \mathrm{Otherwise} \end{cases} $$

and the initial condition $v_{out}\left(0\right) = 0$ $$ v_{out}\left(t\right) = e^{-t/RC} \left(\frac{1}{RC}\int_{0}^{t} {e^{\tau / RC}{\frac{1}{T}}}d\tau\right) $$

We can evaluate the right hand side of the above equation in two pieces: (1) for $t$ between $0$ and $T$, and (2) for $t$ greater than $T$.

(1) $0 \leq t \leq T$, $$ y\left(t\right) = e^{-t/RC} \left(\frac{1}{RC}\int_{0}^{t} {e^{\tau / RC}{\frac{1}{T}}}d\tau\right) = \frac{1}{T}\left(1 - e^{-t/RC}\right) $$

(2) $t > T$, $$ y\left(t\right) = e^{-t/RC} \left(\frac{1}{RC}\int_{0}^{T} {e^{\tau / RC}{\frac{1}{T}}}d\tau\right) = \frac{1}{T}\left(1 - e^{-T/RC}\right)e^{-\left(t-T\right)/RC} $$

From the above definition of $v_{in}\left(t\right)$, we have

$$ \delta\left(t\right) = \lim_{T \to 0}v_{in}\left(t\right) $$

Thus, the impulse response of the RC circuit is,

$$ h\left(t\right) = \lim_{T \to 0}v_{out}\left(t\right) $$

Let us apply this limit to the two segments of $v_{out}\left(t\right)$,

(1) $0 \leq t \leq T$: In this interval, as $T \to 0$, any value of $t \in \left[0, T\right]$ tends to $T$ (and both tend to $0$), so we can evaluate the segment at $t = T$. Expanding the exponential as a Taylor series, $$ \lim_{T \to 0} \frac{1}{T}\left(1 - e^{-T/RC}\right) = \lim_{T \to 0} \frac{1 - 1 + \frac{T}{RC} - \frac{1}{2!}\left(\frac{T}{RC}\right)^2 + \dots}{T} = \frac{1}{RC} $$

(2) $t > T$ $$ \lim_{T \to 0}\frac{1}{T}\left(1 - e^{-T/RC}\right)e^{-\left(t-T\right)/RC} = \lim_{T \to 0} \frac{1 - 1 + \frac{T}{RC} - \frac{1}{2!}\left(\frac{T}{RC}\right)^2 + \dots}{T}e^{-\left(t-T\right)/RC} = \frac{1}{RC}e^{-t/RC} $$

The above two pieces of $v_{out}\left(t\right)$ merge as $T \to 0$, and the resulting impulse response is,

\begin{equation} h\left(t\right) = \frac{1}{RC}e^{-t/RC} \end{equation}
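The limiting process above can also be seen numerically: as the pulse width $T$ shrinks, the pulse response derived above approaches $\frac{1}{RC}e^{-t/RC}$. A minimal sketch (the value of $RC$ is an arbitrary choice):

In [ ]:
import numpy as np
import matplotlib.pyplot as plt

RC = 1.0
t = np.arange(0, 5, 0.01)

def pulse_response(t, T):
    """Two-piece response of the RC circuit to a pulse of width T and height 1/T."""
    rising = (1.0/T) * (1 - np.exp(-t/RC))                         # 0 <= t <= T
    decaying = (1.0/T) * (1 - np.exp(-T/RC)) * np.exp(-(t-T)/RC)   # t > T
    return np.where(t <= T, rising, decaying)

plt.figure(figsize=(13, 3))
for T in [2.0, 1.0, 0.5, 0.1]:
    plt.plot(t, pulse_response(t, T), label='$T = {0}$'.format(T))
plt.plot(t, np.exp(-t/RC)/RC, 'k--', label='$h(t) = e^{-t/RC}/RC$')
plt.xlabel('Time')
plt.legend();
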

Exercises:

  1. Can you identify the impulse response directly from the following equation for the output of the RC circuit (with zero initial condition)? $$ v_{out}\left(t\right) = e^{-t/RC} \left(\frac{1}{RC}\int_{0}^{t} {e^{\tau / RC}{v_{in}\left(\tau\right)}}d\tau\right) $$

  2. Derive the impulse response for a system that performs a moving average on a given input signal. The input-output relationship of a moving average system is as follows, $$ y\left(t\right) = \frac{1}{T}\int_{t-T}^{t}x\left(\tau\right)d\tau $$

  3. Prove that the convolution integral is commutative.

  4. What happens when you convolve a signal $x\left(t\right)$ with an impulse function $\delta\left(t\right)$?

3.4.1 Calculating the Convolution Integral

What exactly are we doing in the convolution integral? We appear to be multiplying two signals and then calculating the area under the resultant curve. We can gain a better understanding of the mechanics of this integral through a graphical illustration. Let us consider the following two signals $x_{1}\left(t\right)$ and $x_{2}\left(t\right)$,

$$ x_{1}\left(t\right) = \begin{cases} 1 & 0 \leq t \leq T_{1} \\ 0 & \mathrm{Otherwise} \end{cases} $$

$$ x_{2}\left(t\right) = \begin{cases} 1 & 0 \leq t \leq T_{2} \\ 0 & \mathrm{Otherwise} \end{cases} $$

These two signals are illustrated in the following figure for $T_{1} = 1.0$ and $T_{2} = 2.0$.


In [6]:
# x1
def x1(t):
    return 1.0  if t >= 0 and t <= 1.0 else 0.

# x2
def x2(t):
    return 1.0  if t >=0 and t <= 2.0 else 0.

time = np.arange(-5.0, 5.0, 0.01)
x_1 = [ x1(t) for t in time ]
x_2 = [ x2(t) for t in time ]

figure(figsize=(12,2))
plot(time, x_1, label="$x_1$", lw=2)
plot(time, x_2, label="$x_2$", lw=2)
xlim(-5.0, 5.0)
ylim(-0.1, 1.1)
title('$x_1$ and $x_2$ signals that are to be convolved', fontsize=15)
xlabel('Time $(t)$', fontsize=15)
gca().yaxis.set_ticks(np.arange(-0, 1.1, 0.5))
legend(prop={'size':20});


The convolution of $x_1(t)$ and $x_2(t)$ is the following,

$$ y(t) = \int_{-\infty}^{\infty}x_1(p)x_2(t - p)dp $$

The first thing to note is that the integration is performed with respect to a dummy variable $p$. The integral involves one signal in its original form $(x_1)$, and the other signal $(x_2)$ reflected and time shifted by $t$. The following illustrates the steps involved in calculating the convolution integral for time $t = 0.5$.


In [7]:
# x1
def x1(t):
    return 1.0  if t >= 0 and t <= 1.0 else 0.

# x2
def x2(t):
    return 1.0  if t >=0 and t <= 2.0 else 0.

# y
def y(x_1, x_2, time, dt):
    """ Calculates the convolution integral between 
    the two given signals."""
    y = np.zeros(len(time))
    _x1 = np.array(x_1)
    for i, _t in enumerate(time):
        _x2 = np.array([ x2(_t-p) for p in time ])
        y[i] = np.dot(_x1, _x2) * dt
    return y   

dt = 0.01
time = np.arange(-5.0, 5.0, 0.01)
x_1 = [ x1(t) for t in time ]
x_2 = [ x2(t) for t in time ]

# Time at which the convolution integral is to be calculated.
t = 0.5

fig, (ax1, ax2, ax3, ax4) = subplots(4, 1, figsize=(11,8), sharex=True)

#
# x1 remains unchanged.
ax1.plot(time, x_1)
ax1.fill_between(time, 0., x_1, facecolor='blue', alpha=0.2)
ax1.set_ylim(-0.1, 1.1)
ax1.set_title('$x_{1}$ remains unchanged', fontsize=17)
ax1.yaxis.set_ticks(np.arange(-0, 1.1, 0.5))

#
# Step1: Reflect and time shift x2
x_2_new = [ x2(t-p) for p in time ]
ax2.plot(time, x_2_new, color=sb.xkcd_rgb["pale red"])
ax2.fill_between(time, 0., x_2_new, facecolor='red', alpha=0.2)
ax2.set_ylim(-0.1, 1.1)
_title = ''.join(('Step 1: $x_{{2}}$ is reflected ',
                  'and shifted by $t$ to get $x_2({0} - p)$'))
ax2.set_title(_title.format(t),
              fontsize=17)
ax2.yaxis.set_ticks(np.arange(-0, 1.1, 0.5))

#
# Step 2: Multiply x1 and the reflected and time-shifted x2.
# Use an array (not a lazy map) so the integrand can be plotted and summed.
integrand = np.array(x_1) * np.array(x_2_new)
ax3.plot(time, x_1)
ax3.fill_between(time, 0., x_1, facecolor='blue', alpha=0.2)
ax3.plot(time, x_2_new, color=sb.xkcd_rgb["pale red"])
ax3.fill_between(time, 0., x_2_new, facecolor='red', alpha=0.2)
ax3.plot(time, integrand, color='black', lw=2)
ax3.set_ylim(-0.1, 1.1)
ax3.set_title('Step 2: Multiply $x_1(p)$ and $x_2({0} - p)$'.format(t),
              fontsize=17)
ax3.yaxis.set_ticks(np.arange(-0, 1.1, 0.5))
ax3.set_xlabel('Time $(p)$', fontsize=15)

#
# Step 3: Calculate the integral.
y_t = np.sum(integrand) * dt
ax4.plot(time, y(x_1, x_2, time, dt), color='0.75', label='$y(t)$')
ax4.plot(t, y_t, 's', color='k', label='$y({0})$'.format(t))
ax4.set_title('Step 3: Calculate the integral.',
              fontsize=17)
ax4.yaxis.set_ticks(np.arange(-0, 1.1, 0.5))
ax4.xaxis.set_ticks(np.arange(-5.0, 5.0, 0.5))
ax4.set_ylim(-0.1, 1.1)
ax4.set_xlim(-5.0, 5.0)
ax4.set_xlabel('Time $(t)$', fontsize=15)
legend(prop={'size':20});

tight_layout()


3.4.2 The what and why of impulse response

The above discussion of the impulse response was formal, relying on mathematics to define it. We also applied this definition to a simple RC circuit to derive its impulse response. Before proceeding to the properties of the impulse response of an LTI system, it would be nice to get an intuitive understanding of what the impulse response means, and why it should be the response to an impulse and not to some other function. This should let one feel that they know something about the impulse response beyond its formal definition and how to calculate it mathematically.

What is the impulse response?

The impulse response is simply a weighting function that weights information in the input signal to an LTI system (the term information is used loosely here). The impulse response $h(t)$ tells us exactly how the different parts of the input contribute to the system's output.

$$ h(t): \begin{array}{lcl} t < 0 & \rightarrow & \text{Weightage for the } \textit{future} \text{ input values.} \\ t = 0 & \rightarrow & \text{Weightage for the } \textit{present} \text{ input value.} \\ t > 0 & \rightarrow & \text{Weightage for the } \textit{past} \text{ input values.} \end{array} $$

Why the "impulse" response and not an "arbitrary" response?

In order to answer this question, let us first look at what a system does to an input signal. In general, a system performs operations on a given input signal, and its output at any given instant of time is determined by present, past and/or future values of the input signal. The exact nature of this dependence is captured by the impulse response.

Given a system $H$, which is a blackbox, how does one determine the exact input-output relationship of the system? Let us assume that we are also told that $H$ is LTI, which means that the output's dependence on the present, past and future input values is linear and does not change with time. To unravel how the system operates on its input, one could apply an input $x(t)$, observe the corresponding output $y(t)$, and try to figure out from there what might be going on inside $H$. Great! But what input? The choice of this input $x(t)$ must be such that, by simply observing the system's output to $x(t)$, we should be able to say with, ideally, no effort the exact dependence of the system's output on the input's present, past and future. It turns out the best choice for such an input is the impulse function - a continuous-time signal with the shortest possible duration. Suppose an impulse function of unit strength is applied to the input of system $H$ at time $t = 0$, and we observe the output $y(t)$ of the system for all time $t \in \left(-\infty, \infty\right)$, with $y(-\infty) = 0$. In this case, any non-zero value of $y(t)$ is purely the result of the input at $t = 0$, since the impulse function is concentrated at $t = 0$.

3.4.3 LTI System Properties and the Impulse Response

In Chapter 1, we discussed the classification of systems based on some fundamental properties, such as memory, stability, causality and invertibility. In the case of an LTI system, the impulse response has specific relationships to these four basic properties.

(A) Memory

A system is said to have memory if its current output depends on past or future values of its input. An LTI system is memoryless if and only if its impulse response is an impulse function, i.e.

$$ h(t) = b\delta(t) $$

Any impulse response that is non-zero for some $t \neq 0$ corresponds to a system with memory.

(B) Stability

A system is said to be stable in the bounded-input bounded-output (BIBO) sense if it produces a bounded output for every bounded input. Let $x(t)$ be a bounded input (i.e. $|x(t)| < B < \infty$) to the system $H$ with impulse response $h(t)$, and let $y(t)$ be the corresponding output.

$$ |y(t)| = \bigg|\int_{-\infty}^{\infty}x(\tau)h(t - \tau)d\tau\bigg| \leq \int_{-\infty}^{\infty}|x(\tau)||h(t - \tau)|d\tau \leq B\int_{-\infty}^{\infty}|h(t - \tau)|d\tau $$

Thus, if the impulse response is absolutely integrable, the output is bounded for every bounded input and $H$ is stable, i.e.

$$ \int_{-\infty}^{\infty}|h(t)|dt < \infty \implies |y(t)| < \infty $$

(C) Causality

A system $H$ is causal if its output does not depend on future values of the input. This translates to the following requirement on the impulse response $$ h(t) = 0 \,\,\, \forall t < 0 $$
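These relationships can be checked numerically from a sampled impulse response; a minimal sketch using the RC impulse response derived earlier (note that a finite time window can only suggest, not prove, absolute integrability):

In [ ]:
import numpy as np

dt = 0.01
t = np.arange(-5, 10, dt)
RC = 1.0
h = np.where(t >= 0, np.exp(-t/RC)/RC, 0.0)   # RC impulse response

causal = np.all(h[t < 0] == 0)       # causality: h(t) = 0 for all t < 0
area = np.sum(np.abs(h)) * dt        # approximates the integral of |h(t)|; ~1 here
memoryless = np.all(h[t != 0] == 0)  # memoryless only if h is a pure impulse

print(causal, area, memoryless)      # True, ~1.0 (finite), False
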

Exercises:

  1. Derive the impulse response of the following system: $$ y(t) = \int_{-\infty}^{t+T}x(\tau)d\tau $$ (a) Is this system stable? (b) Is this system causal?

3.5 Differential Equations

Linear constant-coefficient differential equations are another representation of LTI systems. The differential equation representation can be obtained from knowledge of the physics of the system.

$$ \sum_{k=0}^{N}a_{k}\frac{d^k}{dt^k}y(t) = \sum_{k=0}^{M}b_{k}\frac{d^k}{dt^k}x(t) $$

where $a_k$ and $b_k$ are constant coefficients of the system, $x(t)$ is the input and $y(t)$ is the output. Often $N \geq M$, and $N$ is called the order of the system.

The solution $y(t)$ of the above equation for a given input $x(t)$ requires information about the initial conditions of the system. The number of initial conditions required to solve the differential equation is equal to the order of the equation, $N$.

$$ y(t) \bigg|_{t=0^{-}}, \frac{d}{dt}y(t) \bigg|_{t=0^{-}}, \frac{d^2}{dt^2}y(t) \bigg|_{t=0^{-}}, \cdots \frac{d^{N-1}}{dt^{N-1}}y(t) \bigg|_{t=0^{-}} $$

The initial values of the system correspond to the initial states of the energy-storing elements in the system, and they provide all the information about the system's past that is required to calculate the system's current and future outputs.
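As a concrete example, the RC circuit from earlier satisfies the first-order equation $RC\frac{d}{dt}v_{out}(t) + v_{out}(t) = v_{in}(t)$, and given the single initial condition $v_{out}(0^-)$ it can be integrated numerically. A minimal forward-Euler sketch (the step size is an arbitrary choice, kept small relative to $RC$):

In [ ]:
import numpy as np

dt, RC = 0.001, 1.0
t = np.arange(0, 10, dt)
v_in = np.ones_like(t)        # step input
v_out = np.zeros_like(t)      # initial condition v_out(0-) = 0

# Forward-Euler integration of RC*v' + v = v_in  =>  v' = (v_in - v)/RC
for n in range(len(t) - 1):
    v_out[n+1] = v_out[n] + dt * (v_in[n] - v_out[n]) / RC
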

Exercises:

  1. Derive the differential equation of a series RLC circuit, where the input voltage $v(t)$ is the input and the current in the circuit, $i(t)$, is the output.

3.6 Block Diagram Representation

The block diagram representation of a system is a detailed representation that describes the internal structure of the system and its operations. A block diagram makes use of blocks that represent elementary operations on signals, and the interconnection of these elementary operations results in the overall system. It is important to learn about the block diagram representation in order to be able to physically implement systems of interest. We will talk more about this form of representation when we discuss the implementation of discrete-time LTI systems.

Consider an LTI system described by the following differential equation, $$ \sum_{k=0}^{N}a_{k}\frac{d^k}{dt^k}y(t) = \sum_{k=0}^{M}b_{k}\frac{d^k}{dt^k}x(t) $$

The above equation can be rearranged to get the following form, which can be used to implement the equation, $$ y(t) = \frac{1}{a_0}\sum_{k=0}^{M}b_{k}\frac{d^k}{dt^k}x(t) - \frac{1}{a_0}\sum_{k=1}^{N}a_{k}\frac{d^k}{dt^k}y(t) $$

In the above equation, we perform scalar multiplication, addition and differentiation of $x(t)$ and $y(t)$ to obtain the output. Thus, we should be able to implement any continuous-time LTI system with the following three elementary blocks.

  1. Addition block that takes in a set of inputs $x_i(t)$ and produces an output $y(t)$ that is the sum of the inputs. $$ y(t) = \sum_{i}x_i(t) $$

  2. Scalar multiplication block multiplies the input signal $x(t)$ by a scalar constant $c$. $$ y(t) = cx(t) $$

  3. Differentiator block that computes the derivative of the given input signal. $$ y(t) = \frac{d}{dt}x(t)$$ However, in practice a differentiator is replaced by an integrator block, which produces an output that is the integral of the given input. $$ y(t) = \int_{-\infty}^{t}x(\tau)d\tau $$ An integrator is used instead of a differentiator because it is easier to realize an integrator with analog components than a differentiator, and also because differentiator blocks tend to amplify noise in the system, while integrators smooth it out. This means that we will need to convert our differential equation representation into an integral equation.

The second fundamental theorem of calculus states that a function $F(t)$ has the derivative $f(t)$ if $$ F(t) = \int_{a}^{t}f(x)dx $$

We can use this property to define repeated integrals, analogous to repeated differentiation. Let $y^{(n)}(t)$ represent the $n^{th}$ integral of $y(t) = y^{(0)}(t)$, then $$ y^{(n)}(t) = \int_{-\infty}^{t}y^{(n-1)}(x)dx \implies \frac{d}{dt}y^{(n)}(t) = y^{(n-1)}(t)$$

We can now convert the differential equation of the LTI system shown above into an integral equation in terms of the $y^{(n)}$s, by integrating the equation $N$ times, $$ \sum_{k=0}^{N}a_{k}y^{(N-k)}(t) = \sum_{k=0}^{M}b_{k}x^{(N-k)}(t) $$

Exercise: Verify the above integral equation.

Consider a second order equation where $N = 2$ and $M = 2$. $$ a_2\frac{d^2}{dt^2}y(t) + a_1\frac{d}{dt}y(t) + a_0y(t) = b_2\frac{d^2}{dt^2}x(t) + b_1\frac{d}{dt}x(t) + b_0x(t) $$

This can be replaced by the integral equation, $$ a_2y^{(0)}(t) + a_1y^{(1)}(t) + a_0y^{(2)}(t) = b_2x^{(0)}(t) + b_1x^{(1)}(t) + b_0x^{(2)}(t) $$

Thus, $$ y(t) = \frac{1}{a_2} \left(b_2x^{(0)}(t) + b_1x^{(1)}(t) + b_0x^{(2)}(t) - a_1y^{(1)}(t) - a_0y^{(2)}(t)\right) $$

The above integral equation can be realized as follows.

The realization shown above is called Direct Form I; there are several other forms of realization, which will be discussed in more detail in later sections of the book.
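A discrete-time sketch of this Direct Form I structure is given below, with running sums standing in for the integrator blocks (the coefficients and the step input are arbitrary choices; this illustrates the structure, not a production-quality integrator):

In [ ]:
import numpy as np

# Arbitrary second-order system: a2*y'' + a1*y' + a0*y = b0*x.
a2, a1, a0 = 1.0, 0.4, 1.0
b2, b1, b0 = 0.0, 0.0, 1.0

dt = 0.001
t = np.arange(0, 20, dt)
x = np.ones_like(t)           # step input
y = np.zeros_like(t)

ix1 = ix2 = 0.0               # running first and second integrals of x
iy1 = iy2 = 0.0               # running first and second integrals of y

for n in range(len(t)):
    ix1 += x[n] * dt          # x^(1)
    ix2 += ix1 * dt           # x^(2)
    y[n] = (b2*x[n] + b1*ix1 + b0*ix2 - a1*iy1 - a0*iy2) / a2
    iy1 += y[n] * dt          # y^(1)
    iy2 += iy1 * dt           # y^(2)
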

3.7 State-space description

The state-space representation converts an $N^{th}$ order differential equation into a set of coupled $1^{st}$ order equations, which describe how the states of the system evolve with time, and how the output of the system changes with the system states and its input. The main advantage of the state-space representation is that the system equations can be written in matrix form, which allows one to use tools from linear algebra to analyse and design systems.

The idea of a state is fundamental to the state-space representation. The state of a system $\mathbf{x}(t)$ is a set of variables that at any given time contain all the information about the system's past. The variables that constitute the state of a system are not unique, and there are several possible choices of variables that constitute a system's state.

The state-space representation of a continuous-time LTI system is as follows,

$$ \frac{d}{dt}\mathbf{x}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}u(t) $$

$$ y(t) = \mathbf{C}\mathbf{x}(t) + \mathbf{D}u(t) $$

The matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ describe the nature of the system and its structure, $u(t)$ is the system input and $y(t)$ is the system output.

Let us now take a look at how we can represent the following $2^{nd}$ order system in the state-space form,

$$ a_2\frac{d^2}{dt^2}y(t) + a_1\frac{d}{dt}y(t) + a_0y(t) = bu(t) $$

We can convert the above equation into a set of coupled $1^{st}$ order differential equations by defining the following variables,

$$ x_1(t) = y(t) $$

$$ x_2(t) = \frac{d}{dt}x_1(t) = \frac{d}{dt}y(t) $$

$$ \frac{d^2}{dt^2}y(t) = \frac{1}{a_2} \left( -a_1\frac{d}{dt}y(t) - a_0y(t) + bu(t) \right) $$

$$ \frac{d}{dt}x_2(t) = \frac{-a_1}{a_2}x_2(t) + \frac{-a_0}{a_2}x_1(t) + \frac{b}{a_2}u(t) $$

If we take $x_1$ and $x_2$ to be the state variables, then

$$\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}$$

Then, we have

$$ \frac{d}{dt}\mathbf{x}(t) = \begin{pmatrix} 0 & 1 \\ -a_0/a_2 & -a_1/a_2 \end{pmatrix} \mathbf{x}(t) + \begin{pmatrix} 0 \\ b/a_2 \end{pmatrix} u(t) $$

$$ y(t) = \begin{pmatrix} 1 & 0 \end{pmatrix} \mathbf{x}(t) $$

Here, $\mathbf{A} = \begin{pmatrix} 0 & 1 \\ -a_0/a_2 & -a_1/a_2 \end{pmatrix}$, $\mathbf{B} = \begin{pmatrix} 0 \\ b/a_2 \end{pmatrix}$, $\mathbf{C} = \begin{pmatrix} 1 & 0 \end{pmatrix}$ and $\mathbf{D} = 0$.
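The matrix form makes simulation mechanical; a minimal Euler-integration sketch of the state equations for this second-order example (the coefficient values and step input are arbitrary choices):

In [ ]:
import numpy as np

a2, a1, a0, b = 1.0, 0.4, 1.0, 1.0
A = np.array([[0.0, 1.0], [-a0/a2, -a1/a2]])
B = np.array([0.0, b/a2])
C = np.array([1.0, 0.0])      # D = 0 for this system

dt = 0.001
t = np.arange(0, 20, dt)
u = np.ones_like(t)           # step input
xs = np.zeros(2)              # state vector, zero initial conditions
y = np.zeros_like(t)

for n in range(len(t)):
    y[n] = C @ xs                        # y = C x
    xs = xs + dt * (A @ xs + B * u[n])   # Euler step of dx/dt = A x + B u
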

References

  1. Haykin, Simon, and Barry Van Veen. Signals and systems. John Wiley & Sons, 2007.
  2. Weisstein, Eric W. "Fundamental Theorems of Calculus." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/FundamentalTheoremsofCalculus.html
