Expert level

Welcome to the expert level! For this level, I'm assuming you are somewhat familiar with the Python programming language, for example by having completed the 'adept' level or by having done some other projects in Python. Alternatively, if you have enough experience in other programming languages, you can follow along. If some aspect of the Python programming language is unclear, I found Jake VanderPlas' "A Whirlwind Tour of Python" a great reference to quickly look things up.

During the next half hour, we are going to construct a basic pipeline that begins with some raw MEG data and performs a minimum-norm estimate (MNE) of the cortical sources.

The experiment

I have prepared a bit of MEG data for you. It was recorded while a brave volunteer was in the scanner, listening to auditory beeps and looking at visual checkerboards. The volunteer was staring at a "fixation cross" in the center of a screen in front of him/her. From time to time, beeps sounded from either the left or the right side of his/her head. At other times, a checkerboard was displayed either to the left or the right side of the cross. The volunteer was instructed to keep his/her eyes fixed on the cross and not look directly at the checkerboards. This way, the checkerboard was presented either to the left or the right visual field of the volunteer.

By analyzing the MEG signal, we should be able to see the activity in the auditory and visual cortices.

Some housekeeping

The first order of business is to import the MNE-Python module mne and to configure the graphics engine to send all figures to the browser. Executing the cell below will accomplish this.


In [ ]:
# Import the MNE-Python module, which contains all the data analysis routines we need
import mne
print('MNE-Python imported.')

# Configure the graphics engine
from matplotlib import pyplot as plt
plt.rc('figure', max_open_warning=100)
%matplotlib notebook
from mayavi import mlab  # Mayavi is used for 3D graphics
mlab.init_notebook('ipy')  # This instructs Mayavi to render in the background and send png graphics to the browser
print('From now on, all graphics will be sent to your browser.')

Overview of MNE-Python and how to view the documentation

The MNE-Python software module is subdivided into several sub-modules, all housing classes and functions related to different aspects of data analysis. Here are the sub-modules that are relevant for this exercise:

mne - Top level module, containing general purpose classes/functions as well as all sub-modules
mne.io - Functions related to loading data in different formats
mne.viz - Functions related to data visualization
mne.minimum_norm - Classes and functions related to performing minimum norm estimates (MNE)

Documentation of all functions and classes can be found at the Python API reference page. For quick access, all class/function names used in this notebook are also links to their respective documentation pages. To use MNE-Python effectively for your own projects, you need to be able to use the documentation. Therefore, I'm not going to spell out how to call each function, but instead want you to use the documentation to look up this information.

Loading data

Let's dive in by loading some data and looking at the raw signal coming out of the MEG scanner. The function to load the FIFF files that are produced by the recording software is called mne.io.read_raw_fif. Take a look at its documentation. From the function signature we can see the function has one required argument, the name of the file to load, and several optional arguments that we can leave alone for now.

The file with the raw data is 'data/sample-raw.fif'. In the cell below, write the line of code to load it using the mne.io.read_raw_fif function and store the result in a variable called raw.


In [ ]:
# Write your line of code here
raw =
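
If you get stuck, here is one possible way to complete the cell above. It is a minimal sketch that only passes the required filename argument and leaves all optional arguments at their defaults:

In [ ]:
# Load the raw MEG recording; only the filename is required here
raw = mne.io.read_raw_fif('data/sample-raw.fif')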

If the code was correct, the cell below will visualize the raw data, using an interactive figure. Click in the scrollbars or use the arrow keys to scroll through the data.


In [ ]:
# Lots of MNE-Python objects have a .plot() method, and mne.Raw is no exception
raw.plot();  # Note the semicolon ; at the end, see the text below to find out why

What's with the semicolon ;? The Jupyter notebook you are working in right now displays the result of the last statement in a code cell. The plotting functions return a figure object. Therefore, if the last statement of a cell is a call to a plotting function, the figure is displayed twice: once when the function is called, and once more when the figure object is displayed by the Jupyter notebook. By ending a line with a semicolon `;`, we suppress the result by starting a new, empty statement.

Browsing through the channels, you will notice there are several channel types:

  • 204 MEG gradiometers (102 pairs)
  • 102 MEG magnetometers
  • 9 STIM channels
  • 1 EOG sensor

All these channels record different information about the volunteer and the environment.

Staring at the raw data is not very helpful. The brain is constantly doing all kinds of things, so there are a lot of overlapping signals.

For this exercise, we are interested only in the signals that are related to processing the visual and auditory stimuli that were presented to the volunteer. Let's start by cutting out only the pieces of signal surrounding the times at which a stimulus was presented. Of course, that means we first have to figure out when stimuli were presented. For this, we can use the STIM channels.

The STIM channels and events

In the figure you just made, scroll down and take a look at channel STI 014. On this channel, the computer that presented the stimuli sent timing information to the MEG equipment. Whenever a stimulus (checkerboard or beep) was presented, the signal on this channel briefly jumps from 0 to either 1, 2, 3 or 4, indicating the type of stimulus.

We can use this channel to create an "events" matrix: a table listing all the events, along with the time at which each occurred and the type of stimulus that was presented. The function to do this is called mne.find_events, and it produces a 2D NumPy array with one row per event, containing the time of the event and a numerical code indicating its type.

The event array can be visualized using the mne.viz.plot_events function.

In the cell below, use the mne.find_events function to create an array called events, then visualize it using the mne.viz.plot_events function:


In [ ]:
events =
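
In case you need a hint, the two calls might look like this (a sketch using only the required arguments mentioned in the text; everything else stays at its default):

In [ ]:
# Find all events on the STIM channels and plot when each event type occurred
events = mne.find_events(raw)
mne.viz.plot_events(events);  # semicolon to avoid showing the figure twice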

The system generated 6 types of events. We are interested in events with ids 1 to 4, which correspond to the presentation of one of the four different types of stimuli. Let's give them names. Here is a dictionary mapping string names to event ids:


In [ ]:
event_id = {
    'audio/left': 1,
    'audio/right': 2,
    'visual/left': 3,
    'visual/right': 4
}

Creating epochs

Now that we have the information on what stimulus was presented at what time, we can extract "epochs". Epochs are little snippets of signal surrounding an event. These epochs can then be averaged to produce the "evoked" signal.

To cut up the continuous data into epochs, create an mne.Epochs object. Pass the raw data, the events array and the event_id dictionary as parameters. By default, epochs will be cut starting from 0.2 seconds before the onset of the event until 0.5 seconds after the onset. These defaults are fine for the data we're currently analyzing.

A note on creating objects in Python: In the Python programming language, creating objects is very similar to calling functions. To create an object of a certain class, you call the class name as a function. For example, to create an object of type `str`, you call `str()`. You can pass parameters as usual: `str('my string')`.

The created mne.Epochs object has a .plot() method (like so many of MNE-Python's objects). Try calling it to visualize the epochs. Don't forget to put a semicolon ; at the end of your plotting call or the figure will show twice.

Write some code in the cell below to create the epochs and visualize them:


In [ ]:
epochs =
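
If you get stuck, one possible way to complete the cell above is sketched below, relying on the default epoch window of -0.2 to 0.5 seconds mentioned in the text:

In [ ]:
# Cut the continuous data into epochs around each event
epochs = mne.Epochs(raw, events, event_id)

# Visualize the epochs (semicolon to avoid showing the figure twice)
epochs.plot();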

The figure you just created is interactive. Try clicking in the scrollbars and using the arrow keys to explore the data. Also try pressing the b key to switch to "butterfly" mode. In this mode, all the channels are plotted on top of each other. This is a great mode for quickly checking data quality: can you spot any epochs containing anomalous spikes caused by eye-blinks and movements of the volunteer? Clicking on epochs will turn them red, causing them to be dropped from further analysis after you click the "end interaction" button.

Visualizing the evoked field

Now we have snippets of data that likely contain signals related to the processing of the stimuli. However, there are still so many overlapping signals that it's difficult to see anything.

During the talks earlier today, you have heard about "evoked" data. By averaging all the epochs corresponding to a stimulus, signals that are consistently present every time a stimulus was presented will remain, while all other signals will more or less cancel out. The result is referred to as the "evoked" field (i.e. signals that are "evoked" by the stimulus).

Averaging epochs is simple: the mne.Epochs object has a method called average for exactly that purpose. The method doesn't need any parameters (there are some optional ones, but we can leave them alone for now) and produces a new object of type mne.Evoked. Of course, this evoked object also has a plot method you can use to plot a basic visualization of it, but also a plot_joint method that provides a much better visualization.

Another useful feature of the mne.Epochs object is that it behaves as a Python dictionary. To select all the epochs that correspond to a specific event type, you can index the object like so:

epochs['visual/left']

(it uses the string descriptions that we defined in the event_id dictionary earlier on.) Hence, to visualize the evoked field in response to a checkerboard presented in the left visual field of the volunteer, we write:

epochs['visual/left'].average().plot_joint()

In the cells below, visualize the evoked fields for all four stimuli (if you get stuck, a reference sketch appears further below):


In [ ]:


In [ ]:


In [ ]:


In [ ]:

From the evoked data, we can see several bursts of activity following the presentation of a stimulus. You can also see how the different sensors (magnetometers and gradiometers) pick up the same signal. Looking at the topographic maps (the "heads"), you can already see how the visual stimuli generated activity at the back of the head, where the visual cortex is located, and how auditory stimuli generated activity on the side of the head, where the auditory cortices are located.
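
For reference, the four cells above could be filled in by repeating the pattern from the text once per condition, for example:

In [ ]:
# Evoked fields for each of the four stimulus types
epochs['audio/left'].average().plot_joint();
epochs['audio/right'].average().plot_joint();
epochs['visual/left'].average().plot_joint();
epochs['visual/right'].average().plot_joint();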

But let's turn it up a notch and try to estimate and visualize the cortical origins of the evoked fields.

Loading the MRI data and the concept of "subjects_dir"

As you may remember from the talks earlier today, we can use MRI data to create a detailed model of the head, which allows us to simulate the signals propagating from the cortex to the MEG sensors, which in turn allows us to estimate the cortical origins of the signal.

Processing the raw MRI data into a 3D head model takes about 24 hours and is performed by a program called FreeSurfer. We don't have that kind of time right now, so I ran the program in advance for you. FreeSurfer stores its output in a folder, which is available on this system as data/mri. Inside this folder is a subfolder for each subject. We only have one subject, called sample.

We are not going to load all of the FreeSurfer data into memory. Instead, we're going to tell MNE-Python where the MRI folder ('data/mri') is located on the system, so the required data can be loaded as needed. The cell below accomplishes this:


In [ ]:
# This code sets an environment variable called SUBJECTS_DIR
import os
os.environ['SUBJECTS_DIR'] = 'data/mri'

The data/mri folder contains the MRI data for all volunteers that participated in the experiment. In our case, there is only one volunteer, named sample. All the MNE-Python functions that need access to the head model take a parameter subject, which needs to be set to 'sample' in our case. With access to both the FreeSurfer folder and the subject name, these functions know where to find the head model data they need.

Another tedious task is aligning the coordinate frames of the MRI and the MEG scanners, which requires some interactive tools that unfortunately don't work in the browser. The result of this alignment process is a coordinate transformation object, which I have also prepared for you in advance. The line of code below will load the transformation into memory:


In [ ]:
trans = mne.read_trans('data/sample-trans.fif')

To visualize the head model and check if the coordinate systems have been properly aligned, you can use the mne.viz.plot_alignment function. Take a look at its documentation and you will see you can pass a whole list of different objects to this function. The purpose of the function is to visualize everything you give it in the same coordinate space. If everything lines up properly, we're good to go!

The line of code below will call the function. Take note of the subject parameter. You will need to use this parameter for any function calls that need access to the head model.


In [ ]:
mne.viz.plot_alignment(epochs.info, trans, subject='sample', surfaces=['white', 'outer_skin'])

The above figure is interactive. Drag on the figure to rotate the 3D model and check that the brain and head, which are generated from the MRI images, are aligned nicely with the MEG helmet and the sensors, whose locations are taken from the epochs.info dictionary.

Creating a source space

We are going to estimate the cortical origins of the signals by creating a fine grid of points along the cortex. More precisely, we're going to put these points on the boundary between the white and gray matter of the brain. Then, for each grid point, we will compute a spatial filter that attempts to isolate any signals possibly originating from that point. Taken together, the estimated activity at all the grid points gives a complete picture of the estimated signals originating from all over the cortex.

To define the grid points, or the "source space" as MNE-Python calls it, we can use the mne.setup_source_space function. This function needs the subject parameter that you saw before in the mne.viz.plot_alignment function. There is one optional parameter you will want to set to speed up the computation: pass add_dist=False to disable the very lengthy point-to-point distance computation that we don't need for this exercise. You can leave the rest of the parameters at their default values.

The mne.setup_source_space function produces an object of the type mne.SourceSpaces (plural, because it creates a separate source space for each hemisphere). This object has, you guessed it, a plot method to visualize it.

Create a source space, store it in a variable named src and visualize it:


In [ ]:
src =
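
In case you need a hint, the calls might look like this (a sketch using the subject name and the add_dist=False setting mentioned in the text; the SUBJECTS_DIR environment variable set earlier tells MNE-Python where to find the FreeSurfer data):

In [ ]:
# Set up the source space on the cortex of subject 'sample';
# add_dist=False skips the lengthy point-to-point distance computation
src = mne.setup_source_space('sample', add_dist=False)

# Visualize the source space (semicolon to avoid showing the figure twice)
src.plot();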

The forward model

With the head model and source space in place, we can compute the forward model (or forward "solution" as MNE-Python calls it). This is a simulation of the magnetic fields originating from the grid points of the source space, propagating through the various tissues in the head model to reach the MEG sensors.

It is created by the mne.make_forward_solution function. This function has four required parameters:

  1. the epochs.info dictionary containing information about the sensors
  2. the coordinate transformation trans that aligns the coordinates of the MRI with those of the MEG scanner.
  3. the source space src we just created
  4. the location of the physical model of the various tissues in the head. This has been computed by the FreeSurfer program and can be found at: bem='data/mri/sample/bem/bem-sol.fif'.

Go ahead and create the forward model and store it in a variable called fwd:


In [ ]:
fwd =
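
If you get stuck, here is a sketch of the call, using only the four required parameters listed above:

In [ ]:
# Simulate how signals travel from each source point, through the head model, to the sensors
fwd = mne.make_forward_solution(epochs.info, trans, src,
                                bem='data/mri/sample/bem/bem-sol.fif')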

If all went well, the following line of code will plot the sensitivity profile: how well the MEG sensors pick up signals from each grid point of the source space. You will see that the closer a grid point is to a MEG sensor, the better we can see signals originating from it.


In [ ]:
brain = mne.sensitivity_map(fwd).plot(hemi='both', surface='white', time_label='Sensitivity map')
brain.scale_data_colormap(0, 0.5, 1, False)
brain

The inverse model

The forward model we just computed can simulate signals originating from the cortex to the MEG sensors. We want to go the other way: tracing signals measured at the MEG sensors back to their cortical origin. To "invert" the forward model, we will use the minimum-norm estimate (MNE) method. This method combines the covariance between the sensors with the forward model to construct a suitable inverse model.

We can use the mne.compute_covariance function to compute the covariance between the sensors. For the MNE method, we need to compute this using only the data just before the stimulus was presented. If we regard the time at which the stimulus was presented as 0, we want all the negative time points. It is also a good idea to apply some shrinkage to the covariance matrix, which will make the solution better behaved.

Call mne.compute_covariance, give it the epochs, set tmax=0 and set method='shrunk'. Store the result in a variable called cov:


In [ ]:
cov =
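
In case you need a hint, the call might look like this (a sketch using only the parameters mentioned in the text):

In [ ]:
# Compute the noise covariance from the pre-stimulus period (everything up to t=0),
# with 'shrunk' regularization to keep the matrix well behaved
cov = mne.compute_covariance(epochs, tmax=0, method='shrunk')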

If all went well, the following line will plot some information about the covariance matrix you just computed:


In [ ]:
mne.viz.plot_cov(cov, epochs.info);

Now we can compute the actual inverse model, or inverse "operator" as MNE-Python calls it. This is done with the mne.minimum_norm.make_inverse_operator function. It has as required parameters the epochs.info dictionary, the forward model fwd and the covariance cov we just computed. Store the result in a variable called inv:


In [ ]:
inv =
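
If you get stuck, here is a sketch of the call, using only the three required parameters mentioned in the text:

In [ ]:
# Combine the sensor information, forward model and noise covariance into an inverse operator
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, cov)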

Applying the inverse model to produce source estimates

The inverse model can be used to estimate, for any MEG signal in the recording, the cortical origins. Let's estimate the cortical origins of the evoked fields we computed earlier.

To apply the inverse model to an mne.Evoked object, use the mne.minimum_norm.apply_inverse function. It takes two required parameters: the mne.Evoked object and the inverse operator inv. You can leave all the optional parameters at their default settings.

Go back to the visualizations of the evoked fields you made earlier if you need a refresher on how to create an mne.Evoked object for each of the four stimuli. Then, you can use the mne.minimum_norm.apply_inverse function to transform all four mne.Evoked objects into four mne.SourceEstimate objects. These objects hold for each of the grid points in the source space, a time course of the estimated current at that point.

mne.SourceEstimate objects have a plot method you can use to visualize the estimated currents on a 3D model of the brain at a certain time. To get a satisfying plot, it's best to tweak some of the default parameters:

  • A good time to visualize is 0.08 seconds after the onset of the stimulus, which is the height of the first burst of activity (see the evoked field visualizations). You can do this by specifying initial_time=0.08
  • Set hemi='both' so both hemispheres are drawn (by default, only the left hemisphere is shown)
  • Set views=['lateral', 'medial', 'caudal'] to draw the brain from multiple angles at the same time.
  • By default, an "inflated" version of the brain is drawn. This is great for peering inside the sulci. You can also try specifying surface='white' to switch the view to the white/gray matter boundary.
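
Putting this together, a sketch for one of the four conditions could look like the cell below (the other conditions follow the same pattern; only the parameters listed above are used):

In [ ]:
# Estimate the cortical sources of the response to a checkerboard in the left visual field
stc = mne.minimum_norm.apply_inverse(epochs['visual/left'].average(), inv)

# Plot the estimate at the first burst of activity, showing both hemispheres from several angles
stc.plot(initial_time=0.08, hemi='both', views=['lateral', 'medial', 'caudal']);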

Now, go ahead and visualize some brain activity!

  • Can you trace the responses to checkerboards to the visual cortex?
  • Can you trace the responses to auditory beeps to the auditory cortex?
  • Can you spot the difference between presenting a checkerboard to the left/right visual field?
  • Can you spot the difference between presenting a beep to the left/right side of the head?

In [ ]:


In [ ]:


In [ ]:


In [ ]:

The End

Congratulations! You have completed all levels.

Where to go from here?

To learn more about analyzing MEG data with MNE-Python, there are some excellent tutorial papers:

A Reproducible MEG/EEG Group Study With the MNE Software: Recommendations, Quality Assessments, and Good Practices
Mainak Jas, Eric Larson, Denis A. Engemann, Jaakko Leppäkangas, Samu Taulu, Matti Hämäläinen and Alexandre Gramfort

Analysis of Functional Connectivity and Oscillatory Power Using DICS: From Raw MEG Data to Group-Level Statistics in Python
Marijn van Vliet, Mia Liljeström, Susanna Aro, Riitta Salmelin and Jan Kujala

Mainak Jas also has a wonderful set of Jupyter notebooks:
https://jasmainak.github.io/mne-workshop-brown

And of course there is the extensive documentation of MNE-Python itself, including many tutorials and even more quick examples:
https://mne.tools/stable/documentation.html

Finally, when you are getting more comfortable with doing analysis and want to know how to start organizing your code, check out this paper:

Guidelines for data analysis scripts
Marijn van Vliet