In [ ]:
%matplotlib inline
This example illustrates how to generate source estimates and simulate raw data
using subject anatomy with the :class:`mne.simulation.SourceSimulator` class.
Once the raw data is simulated, source estimates are reconstructed from it
using the dynamic statistical parametric mapping (dSPM) inverse operator.
In [ ]:
# Author: Ivana Kojcic <ivana.kojcic@gmail.com>
#         Eric Larson <larson.eric.d@gmail.com>
#         Kostiantyn Maksymenko <kostiantyn.maksymenko@gmail.com>
#         Samuel Deslauriers-Gauthier <sam.deslauriers@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
print(__doc__)
# To simulate the sample dataset, information of the sample subject needs to be
# loaded. This step will download the data if it is not already on your machine.
# The subjects directory is also set so it doesn't need to be given to functions.
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'sample'
meg_path = op.join(data_path, 'MEG', subject)
# First, we get an info structure from the sample subject.
fname_info = op.join(meg_path, 'sample_audvis_raw.fif')
info = mne.io.read_info(fname_info)
tstep = 1 / info['sfreq']
# To simulate sources, we also need a source space. It can be obtained from the
# forward solution of the sample subject.
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
src = fwd['src']
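# Optional sanity check (not part of the original pipeline): printing the
# source space summarizes the number of active source locations per hemisphere.
print(src)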
# To simulate raw data, we need to define when the activity occurs using an
# events matrix and to specify the IDs of each event.
# A noise covariance matrix also needs to be defined.
# Here, both are loaded from the sample dataset, but they can also be specified
# by the user.
fname_event = op.join(meg_path, 'sample_audvis_raw-eve.fif')
fname_cov = op.join(meg_path, 'sample_audvis-cov.fif')
events = mne.read_events(fname_event)
noise_cov = mne.read_cov(fname_cov)
# Standard sample event IDs. These values will correspond to the third column
# in the events matrix.
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
            'visual/right': 4, 'smiley': 5, 'button': 32}
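As a quick check (an optional sketch, not part of the original pipeline), the first few rows of the events matrix can be printed; each row holds the sample number, the previous trigger value, and the event ID.
In [ ]:
# Inspect the events matrix: columns are (sample, previous trigger value,
# event ID).
print(events.shape)
print(events[:5])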
In order to simulate source time courses, labels of the desired active regions need to be specified for each of the 4 simulation conditions. Make a dictionary that maps conditions to activation strengths within aparc.a2009s [1]_ labels. In the aparc.a2009s parcellation:

- 'G_temp_sup-G_T_transv' includes the primary auditory cortex
- 'S_calcarine' includes the primary visual cortex
In each of the 4 conditions, only the primary area is activated: during activations of the auditory areas there is no activation in the visual areas, and vice versa. Moreover, for each condition the contralateral region is more active (here, 2 times more) than the ipsilateral one.
In [ ]:
activations = {
    'auditory/left':
        [('G_temp_sup-G_T_transv-lh', 100),  # label, activation (nAm)
         ('G_temp_sup-G_T_transv-rh', 200)],
    'auditory/right':
        [('G_temp_sup-G_T_transv-lh', 200),
         ('G_temp_sup-G_T_transv-rh', 100)],
    'visual/left':
        [('S_calcarine-lh', 100),
         ('S_calcarine-rh', 200)],
    'visual/right':
        [('S_calcarine-lh', 200),
         ('S_calcarine-rh', 100)],
}
annot = 'aparc.a2009s'
# Load the 4 necessary label names.
label_names = sorted(set(activation[0]
                         for activation_list in activations.values()
                         for activation in activation_list))
region_names = list(activations.keys())
# Define the time course of the activity for each region to activate. We use
# a sine wave, and the same waveform is used for all 4 regions.
source_time_series = np.sin(np.linspace(0, 4 * np.pi, 100)) * 10e-9
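To see the template activation before it is scaled per label, an optional sketch (assuming a standard matplotlib backend) plots the waveform, two sine cycles over 100 samples:
In [ ]:
# Plot the template waveform; it will later be scaled by each label's
# activation strength.
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(source_time_series)
ax.set(xlabel='Sample', ylabel='Amplitude')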
Here, :class:`~mne.simulation.SourceSimulator` is used, which allows us to
specify where (label), what (source_time_series), and when (events) each event
type will occur.
We will add data for 4 areas, each of which contains 2 labels. Since the add_data method accepts one label per call, it will be called twice per area. All activations will use the same waveform, but the amplitude will be 2 times higher in the contralateral label, as explained above.
When the activity occurs is defined using events. In this case, they are taken from the original raw data. The first column is the sample number of the event, the second (the previous trigger value) is not used here, and the third is the event ID, which differs for each of the 4 areas.
In [ ]:
source_simulator = mne.simulation.SourceSimulator(src, tstep=tstep)
for region_id, region_name in enumerate(region_names, 1):
    events_tmp = events[np.where(events[:, 2] == region_id)[0], :]
    for i in range(2):
        label_name = activations[region_name][i][0]
        label_tmp = mne.read_labels_from_annot(subject, annot,
                                               subjects_dir=subjects_dir,
                                               regexp=label_name,
                                               verbose=False)
        label_tmp = label_tmp[0]
        amplitude_tmp = activations[region_name][i][1]
        source_simulator.add_data(label_tmp,
                                  amplitude_tmp * source_time_series,
                                  events_tmp)
# To obtain a SourceEstimate object, we use the `get_stc()` method of the
# SourceSimulator class.
stc_data = source_simulator.get_stc()
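For a quick sanity check (optional, not in the original example), the simulated source estimate can be inspected; its data array has one row per simulated vertex and one column per time sample.
In [ ]:
# The simulated SourceEstimate covers only the vertices of the 4 labels;
# the time samples span all simulated events.
print(stc_data)
print(stc_data.data.shape)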
Project the source time series to sensor space. Three types of noise will be added to the simulated raw data:

- multivariate Gaussian noise obtained from the noise covariance of the sample data
- blink (EOG) noise
- ECG noise

The :class:`~mne.simulation.SourceSimulator` can be given directly to the
:func:`~mne.simulation.simulate_raw` function.
In [ ]:
raw_sim = mne.simulation.simulate_raw(info, source_simulator, forward=fwd,
                                      cov=None)
raw_sim.set_eeg_reference(projection=True).crop(0, 60) # for speed
mne.simulation.add_noise(raw_sim, cov=noise_cov, random_state=0)
mne.simulation.add_eog(raw_sim, random_state=0)
mne.simulation.add_ecg(raw_sim, random_state=0)
# Plot the simulated raw data.
raw_sim.plot(title='Simulated raw data')
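For a side-by-side comparison with the real recording (an optional sketch; this reloads the same file the measurement info was taken from), the original raw data can be plotted as well:
In [ ]:
# Load and plot the original recording for visual comparison with the
# simulated data above.
raw = mne.io.read_raw_fif(fname_info)
raw.plot(title='Original raw data')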
Here, source time courses for the auditory and visual areas are reconstructed separately and their difference is shown. This is done merely for a better visual representation of the source reconstruction. As expected, when high activations appear in the primary auditory areas, the primary visual areas show low activations, and vice versa.
In [ ]:
method, lambda2 = 'dSPM', 1. / 9.
epochs = mne.Epochs(raw_sim, events, event_id)
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stc_aud = mne.minimum_norm.apply_inverse(
    epochs['auditory/left'].average(), inv, lambda2, method)
stc_vis = mne.minimum_norm.apply_inverse(
    epochs['visual/right'].average(), inv, lambda2, method)
stc_diff = stc_aud - stc_vis
brain = stc_diff.plot(subjects_dir=subjects_dir, initial_time=0.1,
                      hemi='split', views=['lat', 'med'])
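As an optional follow-up (a minimal sketch using :meth:`~mne.SourceEstimate.get_peak`), the vertex and latency of the strongest absolute difference can be located programmatically:
In [ ]:
# Find the vertex and time point of the largest absolute difference in the
# left hemisphere.
peak_vertex, peak_time = stc_diff.get_peak(hemi='lh')
print(f'Peak vertex: {peak_vertex}, peak latency: {peak_time:.3f} s')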