In [1]:
# relevant imports
import numpy as np
from matplotlib import pyplot as plt
%matplotlib notebook
from ptsa.data.timeseries import TimeSeries
In real applications, you will most likely have your own timeseries data to analyze. To illustrate the functionality of the TimeSeries class, we will construct sinusoids as our timeseries data. Let's create an array of 5000 data points, or samples. With a sampling rate of 10Hz, our timeseries is 5000/10 = 500 seconds long.
In [2]:
num_points = 5000
sample_rate = 10.
# We can specify the timestamps for each data point, from 0.1s to 500s.
t = np.linspace(1, num_points, num_points) / sample_rate
# Let's create two noisy sinusoids with different frequencies.
frequency1 = .5 # 1 cycle every 2 seconds
frequency2 = .1 # 1 cycle every 10 seconds
data1 = np.sin(2*np.pi*frequency1*t) + np.random.uniform(-0.5, 0.5, num_points)
data2 = np.sin(2*np.pi*frequency2*t) + np.random.uniform(-0.5, 0.5, num_points)
Let's check our timepoints.
In [3]:
print('First 5 timestamps: ', t[:5])
print('Last 5 timestamps: ', t[-5:])
We can also visualize the timeseries using matplotlib.
In [4]:
plt.figure(figsize=[10,2])
plt.plot(t, data1, label='%sHz'%str(frequency1))
plt.plot(t, data2, label='%sHz'%str(frequency2))
plt.legend()
Out[4]:
As we zoom in on the data, the random noise we added to the sinusoids becomes apparent.
In [5]:
plt.figure(figsize=[10, 2])
plt.plot(t[500:1000], data1[500:1000], label='%sHz'%str(frequency1))
plt.plot(t[500:1000], data2[500:1000], label='%sHz'%str(frequency2))
plt.legend()
Out[5]:
The TimeSeries class is a convenient wrapper around xarray that offers basic functionality for timeseries analysis. Although we focus here on timeseries data, many of the following examples extend easily to non-timeseries, multidimensional data. To create a TimeSeries object, we simply need to specify the dimensions and the corresponding coordinates along each dimension.
In [6]:
# Let's stack the two time-series data arrays.
data = np.vstack((data1, data2))
# and construct the TimeSeries object
ts = TimeSeries(data,
                dims=('frequency', 'time'),
                coords={'frequency': [frequency1, frequency2],
                        'time': t,
                        'samplerate': sample_rate})
print(ts)
TimeSeries also has a convenient plotting method, inherited from xarray.
Note: since the frequency dimension has float coordinates, we want to avoid exact float comparisons. Thus, instead of using *ts.sel(frequency=frequency1)*, we use *ts.sel(frequency=ts.frequency[0])*. See more about the *.sel()* method in later sections.
In [7]:
plt.figure(figsize=[10,2])
ts.sel(frequency=ts.frequency[0]).plot()
Out[7]:
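The float-comparison caveat deserves a closer look: exact equality can fail after ordinary float arithmetic, which is why selecting by the stored coordinate (or by a nearest-neighbor lookup, which xarray's *.sel()* also supports via *method='nearest'*) is safer. A minimal numpy-only sketch, using made-up frequency values:

```python
import numpy as np

# Hypothetical float coordinates, standing in for ts.frequency above.
freqs = np.array([0.3, 0.1])
target = 0.1 + 0.2                      # float arithmetic: not exactly 0.3
print(target == 0.3)                    # False
# A nearest-coordinate lookup avoids the exact comparison entirely.
idx = int(np.argmin(np.abs(freqs - target)))
print(freqs[idx])                       # 0.3
```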
In [8]:
ts
Out[8]:
In [9]:
# use to_hdf to save data
fname = 'my_ts_data.h5'
ts.to_hdf(fname)
In [10]:
# use from_hdf to load data
ts = TimeSeries.from_hdf(fname)
print(ts)
In [11]:
# Select the data from 100s to 200s
ts.sel(time=(ts.time>100)&(ts.time<200))
Out[11]:
In [12]:
# mean over the time dimension
ts.mean('time')
Out[12]:
In [13]:
ts_transpose = ts.T
print(ts_transpose)
We can try adding the original timeseries and its "transposed" version together. Because their coordinates match, operations can be performed between them even though the data arrays are technically of different shapes. Note that because the transposed version of the data has the same value at every coordinate, by adding them together we're effectively doubling the values.
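The same label-based alignment can be seen with a small standalone xarray example (toy coordinates, independent of *ts*):

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(6).reshape(2, 3),
                 dims=('x', 'y'),
                 coords={'x': [0, 1], 'y': [10, 20, 30]})
# Adding a to its transpose works even though the raw arrays are
# (2, 3) and (3, 2): xarray aligns on coordinate labels, not axis
# positions, so every value is added to itself.
total = a + a.T
print(np.array_equal(total.values, 2 * a.values))  # True
```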
In [14]:
results = ts + ts_transpose
plt.figure(figsize=[10, 3])
# we'll only plot the first 250 samples to see things clearly
plt.plot(ts.time[:250], ts.sel(frequency=ts.frequency[0])[:250], label='original')
plt.plot(results.time[:250], results.sel(frequency=ts.frequency[0])[:250], label='original+transposed')
plt.ylim([-4,4])
plt.legend(ncol=2)
Out[14]:
Without the coordinates, the data cannot be added because their shapes wouldn't match.
In [15]:
try:
    print(data + data.T)
except Exception as e:
    print('Error: ' + str(e))
Note that this feature can sometimes produce undesired results. Suppose we want to subtract one subset of our data from another: because xarray aligns on coordinates, it will drop all unmatched coordinates and return 0 at each matched coordinate.
In [16]:
# Recall that our data contains 5000 points.
# Let's select the first 3000 points and the last 3000 points and subtract one from the other.
# The result is all zeros because unmatched coordinates are dropped,
# and the value at each of the 1000 overlapping coordinates is subtracted from itself.
early_ts = ts.sel(time=ts.time[:3000])
late_ts = ts.sel(time=ts.time[2000:])
print(late_ts - early_ts)
In [17]:
# However, if we only subtract the underlying data arrays,
# it will return the actual difference of the early subset and the late subset of our timeseries.
print(late_ts.values - early_ts.values)
TimeSeries provides a *resampled* method for changing the sampling rate. Let's downsample the first 50 seconds of our signal from 10Hz to 2Hz.
In [18]:
original = ts.sel(frequency=ts.frequency[0]).sel(time=ts.time<50.0)
downsampled = original.resampled(resampled_rate=2.0)
In [19]:
plt.figure(figsize=[10, 4])
plt.plot(original.time, original, color='C0', label='10.0Hz samplerate (original)')
plt.plot(downsampled.time, downsampled, color='C6', label='2.0Hz samplerate (downsampled)')
plt.legend()
Out[19]:
However, as we downsample to lower sampling rates, we lose detail: the downsampled signal can only represent frequencies below half its new sampling rate.
Read more about the relationship between sampling rate and observed frequencies [here](https://en.wikipedia.org/wiki/Nyquist_frequency).
In [20]:
original = ts.sel(frequency=ts.frequency[0]).sel(time=ts.time<50.0)
downsampled = original.resampled(resampled_rate=1.0)
plt.figure(figsize=[10, 4])
plt.plot(original.time, original, color='C0', label='10.0Hz samplerate (original)')
plt.plot(downsampled.time, downsampled, color='C6', label='1.0Hz samplerate (downsampled)')
plt.legend()
Out[20]:
In [21]:
# TimeSeries updates the samplerate for you
print(original.samplerate)
print(downsampled.samplerate)
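To make the Nyquist point concrete: once a sinusoid is sampled below twice its frequency, its samples become indistinguishable from those of a lower-frequency alias. A numpy-only sketch, with frequencies chosen for illustration:

```python
import numpy as np

fs = 1.0                                # 1Hz sampling: Nyquist frequency is 0.5Hz
t = np.arange(50) / fs                  # sample times in seconds
high = np.sin(2 * np.pi * 0.9 * t)      # a 0.9Hz sine, above Nyquist
alias = -np.sin(2 * np.pi * 0.1 * t)    # its 0.1Hz alias (with a sign flip)
print(np.allclose(high, alias))         # True: the sampled values are identical
```

At this sampling rate there is no way to tell the two signals apart, which is why downsampling must discard frequency content above the new Nyquist limit.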
To demonstrate filtering, let's construct a new signal that is the sum of three sinusoids at different frequencies.
In [22]:
freq1 = 3.2
freq2 = 1.6
freq3 = 0.2
data1 = np.sin(2*np.pi*freq1*t)
data2 = np.sin(2*np.pi*freq2*t)
data3 = np.sin(2*np.pi*freq3*t)
# our data are simply the sum of the three sinusoids
data = data1 + data2 + data3
In [23]:
ts = TimeSeries(data, dims=('time',), coords={'time': t, 'samplerate': sample_rate})
print(ts)
In [24]:
# Let's plot the first 200 samples of the data
fig, ax = plt.subplots(4, figsize=[10, 4], sharex=True, sharey=True)
ax[0].plot(t[:200], data1[:200])
ax[1].plot(t[:200], data2[:200])
ax[2].plot(t[:200], data3[:200])
ax[3].plot(t[:200], ts[:200])
ax[0].set_ylabel('3.2Hz')
ax[1].set_ylabel('1.6Hz')
ax[2].set_ylabel('0.2Hz')
ax[3].set_ylabel('sum')
ax[3].set_xlabel('Time(s)')
Out[24]:
1. To filter out the component with the highest frequency (3.2Hz), we'll use a lowpass filter. A lowpass filter preserves any frequency that is lower than the given frequency.
2. To filter out the component with the lowest frequency (0.2Hz), we'll use a highpass filter. A highpass filter preserves any frequency that is higher than the given frequency.
3. To filter out the component with the middle frequency (1.6Hz), we'll use a bandstop filter. A bandstop filter preserves any frequency that is outside of the given frequency range.
Note that these filters suffer from edge effects at both ends of the timeseries.
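As a rough illustration of what a lowpass filter does (this idealized FFT approach is not the filter design *filtered* uses, but the concept is the same), we can zero out all FFT bins above a cutoff. The sketch rebuilds the three-sinusoid signal from above:

```python
import numpy as np

fs = 10.0
t = np.arange(5000) / fs
signal = (np.sin(2*np.pi*3.2*t) + np.sin(2*np.pi*1.6*t)
          + np.sin(2*np.pi*0.2*t))

# Idealized lowpass: zero every FFT bin above the 3.0Hz cutoff.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1/fs)
spectrum[freqs > 3.0] = 0.0
lowpassed = np.fft.irfft(spectrum, n=len(signal))

# The 3.2Hz component is removed; the 1.6Hz + 0.2Hz sum remains.
residual = lowpassed - (np.sin(2*np.pi*1.6*t) + np.sin(2*np.pi*0.2*t))
print(np.max(np.abs(residual)))         # tiny: these sinusoids fall on exact FFT bins
```

This brick-wall approach only works so cleanly here because each sinusoid completes a whole number of cycles in the window; on real data, a proper filter design like the one *filtered* applies behaves much better, at the cost of the edge effects noted above.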
In [25]:
# lowpass filter
filtered_data = ts.filtered(3.0, filt_type='lowpass', order=4)
fig, ax = plt.subplots(4, figsize=[10, 5], sharex=True, sharey=True)
ax[0].plot(t[:200], ts[:200]) # original timeseries
ax[1].plot(t[:200], filtered_data[:200]) # lowpass filtered
ax[2].plot(t[:200], (data2+data3)[:200]) # what we should get (mid + low frequencies)
ax[3].plot(t[:200], (data2+data3-filtered_data)[:200]) # the difference between what we should get and what we got should be close to zero
ax[0].set_ylabel('unfiltered')
ax[1].set_ylabel('filtered')
ax[2].set_ylabel('mid + low')
ax[3].set_ylabel('(mid+low)-filtered')
ax[3].set_xlabel('Time(s)')
Out[25]:
In [26]:
# highpass filter
filtered_data = ts.filtered(0.5, filt_type='highpass', order=4)
fig, ax = plt.subplots(4, figsize=[10, 5], sharex=True, sharey=True)
ax[0].plot(t[:200], ts[:200]) # original timeseries
ax[1].plot(t[:200], filtered_data[:200]) # highpass filtered
ax[2].plot(t[:200], (data2+data1)[:200]) # what we should get (mid + high frequencies)
ax[3].plot(t[:200], (data2+data1-filtered_data)[:200]) # the difference between what we should get and what we got should be close to zero
ax[0].set_ylabel('unfiltered')
ax[1].set_ylabel('filtered')
ax[2].set_ylabel('mid + high')
ax[3].set_ylabel('(mid+high)-filtered')
ax[3].set_xlabel('Time(s)')
Out[26]:
In [27]:
# bandstop filter
filtered_data = ts.filtered([1.4, 1.8], filt_type='stop', order=4)
fig, ax = plt.subplots(4, figsize=[10, 5], sharex=True, sharey=True)
ax[0].plot(t[:200], ts[:200]) # original timeseries
ax[1].plot(t[:200], filtered_data[:200]) # bandstop filtered
ax[2].plot(t[:200], (data1+data3)[:200]) # what we should get (high + low frequencies)
ax[3].plot(t[:200], (data1+data3-filtered_data)[:200]) # the difference between what we should get and what we got should be close to zero
ax[0].set_ylabel('unfiltered')
ax[1].set_ylabel('filtered')
ax[2].set_ylabel('high + low')
ax[3].set_ylabel('(high+low)-filtered')
ax[3].set_xlabel('Time(s)')
Out[27]: