Introduction to Signal Processing

Version 0.1

Chapter 2: Continuous-time Concepts: Signals


Following the definition and classification of signals and systems presented in the last chapter, we proceed to continuous-time signals and systems. This is the most natural place to start, as we are quite familiar with the idea of continuous entities, e.g. continuous time and space. We learned our basic physics (Newtonian mechanics, electromagnetics, etc.) assuming that space and time are continuous quantities. In this chapter and the following ones, we will familiarize ourselves with important ideas in continuous time, which will later help us make the jump to discrete time.

This chapter introduces some preliminary continuous-time signals that often feature in signal processing and related fields. Many of these signals are also found in nature, and are useful in modelling physical systems. Familiarity with these elementary signals will help with many of the concepts that will be developed in the next two chapters on continuous-time systems and transformations of continuous-time signals.

2.1 Exponential Signals

A real exponential signal is written as the following,

$$ x\left(t\right) = Ae^{bt}, $$

where $t$ is time, $A$ is the amplitude of the signal at $t = 0$, and $b$ determines whether $x\left(t\right)$ is an exponentially growing signal $\left(b > 0\right)$ or an exponentially decaying signal $\left(b < 0\right)$. When $b = 0$, we get a constant signal $x\left(t\right) = A, \,\, \forall t$. For a real exponential, the parameters $A$ and $b$ are real numbers.

The plots of $x\left(t\right)$ for different values of $b$ are given in the following figure.


In [1]:
%pylab inline
import seaborn as sb

t = np.arange(-2.0, 2.0, 0.01)
figure(figsize=(14,3))
subplot2grid((1,2), (0,0), rowspan=1, colspan=1)
A, b = 1.4, 0.8 
plot(t, A * np.exp(b * t), label="${0}e^{{ {1}t }}$".format(A, b))
xlabel('Time', fontsize=15)
legend(prop={'size':20})

subplot2grid((1,2), (0,1), rowspan=1, colspan=1)
A, b = 2.1, -1.1
plot(t, A * np.exp(b * t), label="${0}e^{{ {1}t }}$".format(A, b))
xlabel('Time', fontsize=15)
legend(prop={'size':20});


Populating the interactive namespace from numpy and matplotlib

Exponential signals are observed in physical systems. The reader should remember the RC circuit that was discussed in the last chapter, where the capacitor voltage had an exponential decay when the voltage was switched from 1 to 0.
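
The discharge curve can be sketched numerically. The component values below (R = 1 kΩ, C = 1 mF, so the time constant $\tau = RC$ is 1 s) are illustrative assumptions, not values taken from the earlier discussion:

```python
import numpy as np

# Assumed, illustrative component values: tau = R * C = 1 s.
R, C, V0 = 1e3, 1e-3, 1.0
tau = R * C

t = np.linspace(0.0, 5 * tau, 501)
v = V0 * np.exp(-t / tau)  # capacitor voltage after the source switches to 0

# After one time constant the voltage has dropped to V0 / e (about 37%).
print(v[100])  # t[100] == tau
```

Note the characteristic property of the real exponential: after every interval of length $\tau$, the voltage falls by the same factor $1/e$.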

2.2 Sinusoidal Signals

The general form of a continuous-time sinusoidal signal is given by the following,

$$ x\left(t\right) = A\sin\left(\omega t + \phi\right), $$

where $A$ is the amplitude of the sinusoidal signal, $\omega$ is the angular frequency (radians per second), and $\phi$ is the phase angle in radians. The sinusoid is an example of a periodic signal with fundamental period $T$,

$$ T = \frac{2\pi}{\omega} $$

Please verify that $T$ is the fundamental period of $x\left(t\right)$.
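
As a numerical check of this claim, the following sketch uses arbitrary, assumed parameter values ($A = 1.4$, $\omega = 4\pi$, $\phi = \pi/6$):

```python
import numpy as np

A, omega, phi = 1.4, 4 * np.pi, np.pi / 6
T = 2 * np.pi / omega

def x(t):
    return A * np.sin(omega * t + phi)

t = np.linspace(-2.0, 2.0, 1001)

# Shifting by T leaves the signal unchanged ...
print(np.allclose(x(t + T), x(t)))      # True
# ... but a smaller shift, e.g. T/2, does not (the sine flips sign).
print(np.allclose(x(t + T / 2), x(t)))  # False
```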


In [2]:
t = np.arange(-2.0, 2.0, 0.01)
figure(figsize=(14,3))
# sinusoid parameters
A = 1.4
f = 2. # omega = 2 * pi * f
p = 6. # phi = pi / p

lbl_txt = "${0}\,\sin({1}\pi t + \pi / {2})$"
plot(t, A * np.sin(2 * np.pi * f * t + np.pi / p), 
     label=lbl_txt.format(A, 2*f, p))
xlabel('Time', fontsize=15)
ylim(-2, 3)
legend(prop={'size':20});


The exponential signal was introduced earlier as the real exponential, since the parameters of the signal are real numbers. In fact, a more general form of exponential signals is the complex exponential, where the parameters are allowed to be complex numbers. A complex exponential can be written as,

$$ x\left(t\right) = Ae^{bt}, \, t \in \mathbb{R}, \,\, A, b \in \mathbb{C} $$

Consider a complex exponential signal $e^{j\omega t}$. Using Euler's identity, this can be written as the following. $$ e^{j\omega t} = \cos\left(\omega t\right) + j\sin\left(\omega t\right) $$

Thus a continuous-time sinusoid can be written as the real or imaginary part of a complex exponential.

$$ \cos\left(\omega t + \phi\right) = \Re\left(e^{j\left(\omega t + \phi\right)}\right) $$

$$ \sin\left(\omega t + \phi\right) = \Im\left(e^{j\left(\omega t + \phi\right)}\right) $$

Another representation of sinusoidal signals in terms of complex exponentials is the following,

$$ \cos\left(\omega t + \phi\right) = \frac{e^{j\left(\omega t + \phi\right)} + e^{-j\left(\omega t + \phi\right)}}{2} $$

$$ \sin\left(\omega t + \phi\right) = \frac{e^{j\left(\omega t + \phi\right)} - e^{-j\left(\omega t + \phi\right)}}{2j} $$
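
These identities are easy to verify numerically. The values of $\omega$ and $\phi$ below are arbitrary choices for illustration:

```python
import numpy as np

omega, phi = 2 * np.pi * 2.0, np.pi / 6
t = np.linspace(-2.0, 2.0, 401)
theta = omega * t + phi

z = np.exp(1j * theta)  # e^{j(omega t + phi)}

# cos and sin as the real and imaginary parts of the complex exponential
print(np.allclose(np.cos(theta), z.real))  # True
print(np.allclose(np.sin(theta), z.imag))  # True

# ... and as combinations of conjugate exponentials
print(np.allclose(np.cos(theta), (z + np.conj(z)) / 2))     # True
print(np.allclose(np.sin(theta), (z - np.conj(z)) / (2j)))  # True
```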

2.3 Exponential Sinusoidal Signals

A natural extension of the representation of sinusoids in terms of complex exponentials is to combine the real and complex exponentials, which results in exponential sinusoidal signals. An exponentially growing or decaying sinusoid is obtained by multiplying a sinusoidal signal $\left(\sin\left(\omega t + \phi\right)\right)$ by a real exponential signal $\left(e^{bt}\right)$,

$$ x\left(t\right) = Ae^{bt}\sin\left(\omega t + \phi\right) $$

In [3]:
t = np.arange(-2.0, 2.0, 0.01)
# sinusoid parameters
A = 1.4
f = 2. # omega = 2 * pi * f
p = 6. # phi = pi / p

lbl_txt = "${1}e^{{ {0}t }}\,\sin({2}\pi t + \pi / {3})$"

figure(figsize=(14,3))
# exponential parameter
b = 0.67
subplot2grid((1,2), (0,0), rowspan=1, colspan=1)
x = np.exp(b * t) * A * np.sin(2 * np.pi * f * t + np.pi / p)
plot(t, x, label=lbl_txt.format(b, A, 2*f, p))
plot(t, np.exp(b * t) * A, color=sb.xkcd_rgb["pale red"], linestyle='--')
plot(t, -np.exp(b * t) * A, color=sb.xkcd_rgb["pale red"], linestyle='--')
xlabel('Time', fontsize=15)
ylim(-6, 9)
legend(prop={'size':20})

# exponential parameter
b = -1.02
subplot2grid((1,2), (0,1), rowspan=1, colspan=1)
x = np.exp(b * t) * A * np.sin(2 * np.pi * f * t + np.pi / p)
plot(t, x, label=lbl_txt.format(b, A, 2*f, p))
plot(t, np.exp(b * t) * A, color=sb.xkcd_rgb["pale red"], linestyle='--')
plot(t, -np.exp(b * t) * A, color=sb.xkcd_rgb["pale red"], linestyle='--')
xlabel('Time', fontsize=15)
ylim(-10, 11)
legend(prop={'size':20});



2.4 Impulse Function

The first and most important fact to remember about the impulse function is that it is not an ordinary function. The impulse function $\delta\left(t\right)$, also known as the Dirac delta function, is not characterized by the exact values it takes for the different values of its argument; rather, it is characterized by the following properties,

$$ \int_{a}^{b}\delta\left(t\right)dt = \begin{cases} 1, & 0 \in \left[a, b\right] \\ 0, & \mathrm{Otherwise} \end{cases} $$

For any ordinary function $f\left(t\right)$, which is continuous at $t = 0$, $$ \int_{-\infty}^{\infty}f\left(t\right)\delta\left(t\right)dt = f\left(0\right) $$

The first property tells us that the impulse function is concentrated around the origin $t = 0$, while the second property indicates that the impulse function operates like a value selector for an ordinary function.

One way to think of the impulse function in terms of ordinary functions is to see it as the limit of a sequence, or family, of functions. One sequence that is commonly encountered in books on signals and systems is the following,

$$ f_{n}\left(t\right) = \begin{cases} n, & -\frac{1}{2n} \leq t \leq \frac{1}{2n} \\ 0, & \mathrm{Otherwise} \end{cases} $$

This is a rectangular function that grows taller and narrower as $n \to \infty$. Now, let us take $f_{n}\left(t\right)$ and apply it to an ordinary function $g\left(t\right)$, calling the result $g_{n}$.

$$ \int_{-\infty}^{\infty}f_{n}\left(t\right)g\left(t\right)dt = \int_{-\frac{1}{2n}}^{\frac{1}{2n}}ng\left(t\right)dt = g_n $$

Now, if we apply the limit to the sequence $f_{n}$, we get $$ \lim_{n\to\infty}\int_{-\infty}^{\infty}f_{n}\left(t\right)g\left(t\right)dt = \lim_{n\to\infty}g_{n} = g\left(0\right) $$

This is demonstrated in the following.


In [4]:
# Ordinary function.
def g_value(t):
    return 13.0 + 5.25 * np.sin(t - np.pi/2.5)

# Impulse function sequence
def f_value(t, n):
    return n if (t <= (1./(2*n)) and t >= -1./(2*n)) else 0.

dt = 0.001
time = np.arange(-1., 4.0, dt)
g = [g_value(t) for t in time]
f_4 = [f_value(t, 4) for t in time]
f_8 = [f_value(t, 8) for t in time]
f_16 = [f_value(t, 16) for t in time]

figure(figsize=(14,3))
plot(time, g, label="$g$")
plot(time, f_4, label="$f_{4}$")
plot(time, f_8, label="$f_{8}$")
plot(time, f_16, label="$f_{16}$")
xlabel('Time', fontsize=15)
ylim(-0.5, 20)
legend(prop={'size':20});


As $n \to \infty$, the functions $f_{n}$ become more concentrated around the point $t = 0$, and the value of the function at this point, $f_{n}\left(0\right)$, tends to $\infty$. Thus, $\delta\left(t\right)$ would have to be singular at $t=0$. This problem arises only when the impulse function is seen as an ordinary function, which it is not. The impulse function must always be seen as something that makes sense only as an operator applied to ordinary functions.
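
We can also check the convergence $g_{n} \to g\left(0\right)$ directly by approximating the integral numerically for increasing $n$; the function $g$ below is the same one used in the plotting cell above:

```python
import numpy as np

def g(t):
    return 13.0 + 5.25 * np.sin(t - np.pi / 2.5)

def f_n(t, n):
    # Rectangular pulse of width 1/n and height n (unit area).
    return np.where(np.abs(t) <= 1.0 / (2 * n), float(n), 0.0)

dt = 1e-5
t = np.arange(-1.0, 1.0, dt)

g_values = []
for n in [4, 16, 64, 256]:
    # Riemann-sum approximation of the integral of f_n * g.
    g_n = np.sum(f_n(t, n) * g(t)) * dt
    g_values.append(g_n)
    print(n, g_n)

print("g(0) =", g(0))  # the values g_n approach this number
```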

The impulse function, although a purely theoretical construct, has many applications in applied mathematics and its allied fields. It plays a crucial role in the theory of linear systems (as will be seen in the rest of the book), in representing idealized quantities such as point masses or point charges in physics and forces in instantaneous collisions, and in representing the derivatives of step discontinuities.

2.5 Step Function

The step function is defined in terms of the impulse function as follows,

$$ u\left(t\right) = \int_{-\infty}^{t}\delta\left(\tau\right)d\tau = \begin{cases} 1, & t > 0 \\ 0, & t < 0 \end{cases} $$

This function has a step discontinuity at $t = 0$.
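
As a quick numerical sketch of this definition, integrating a narrow rectangular pulse (the $f_{n}$ sequence from the previous section, with a fixed large $n$) produces a sharp approximation of the unit step:

```python
import numpy as np

n = 100       # the pulse has width 1/n and height n
dt = 1e-4
t = np.arange(-1.0, 1.0, dt)

f = np.where(np.abs(t) <= 1.0 / (2 * n), float(n), 0.0)
u = np.cumsum(f) * dt  # running integral of f from -1 up to t

# Away from the (smoothed) discontinuity, u matches the unit step.
print(u[t < -0.1].max())  # ~ 0
print(u[t > 0.1].min())   # ~ 1
```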

