In [ ]:
%matplotlib inline
In [ ]:
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
trans_fname = op.join(data_path, 'MEG', 'sample',
                      'sample_audvis_raw-trans.fif')
raw = mne.io.read_raw_fif(raw_fname)
trans = mne.read_trans(trans_fname)
src = mne.read_source_spaces(op.join(subjects_dir, 'sample', 'bem',
                                     'sample-oct-6-src.fif'))
For M/EEG source imaging, there are three coordinate frames (further
explained in the next section) that we must bring into alignment using two
3D transformation matrices (each a `rotation and translation matrix`_) that
define how to rotate and translate points in one coordinate frame to their
equivalent locations in another.
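Both transforms are stored as :class:`mne.transforms.Transform` objects. As a quick sketch (using the ``trans`` loaded above), printing one shows its source and destination frames together with its 4×4 matrix, and :func:`mne.transforms.apply_trans` maps points from one frame to the other:
In [ ]:
from mne.transforms import apply_trans

# A Transform records its source and destination coordinate frames plus a
# 4x4 rotation-and-translation matrix; printing it shows both.
print(trans)
print(trans['trans'].round(3))

# apply_trans maps points (in meters) from the source frame to the
# destination frame, e.g. the origin of the source frame:
print(apply_trans(trans, [0., 0., 0.]))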
:func:`mne.viz.plot_alignment` is a very useful function for inspecting
these transformations, and the resulting alignment of EEG sensors, MEG
sensors, brain sources, and conductor models. If the ``subjects_dir`` and
``subject`` parameters are provided, the function automatically looks for
the Freesurfer MRI surfaces to show from the subject's folder.
We can use the ``show_axes`` argument to see the various coordinate frames
given our transformation matrices. These are shown by axis arrows for each
coordinate frame:

* the shortest arrow is (R)ight/X
* the medium arrow is forward/(A)nterior/Y
* the longest arrow is up/(S)uperior/Z

i.e., a RAS coordinate system in each case. We can also set the
``coord_frame`` argument to choose which coordinate frame the camera should
initially be aligned with. Let's take a look:
In [ ]:
fig = mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
                             subjects_dir=subjects_dir, surfaces='head-dense',
                             show_axes=True, dig=True, eeg=[], meg='sensors',
                             coord_frame='meg')
mne.viz.set_3d_view(fig, 45, 90, distance=0.6, focalpoint=(0., 0., 0.))

print('Distance from head origin to MEG origin: %0.1f mm'
      % (1000 * np.linalg.norm(raw.info['dev_head_t']['trans'][:3, 3])))
print('Distance from head origin to MRI origin: %0.1f mm'
      % (1000 * np.linalg.norm(trans['trans'][:3, 3])))
dists = mne.dig_mri_distances(raw.info, trans, 'sample',
                              subjects_dir=subjects_dir)
print('Distance from %s digitized points to head surface: %0.1f mm'
      % (len(dists), 1000 * np.mean(dists)))
.. role:: pink
.. role:: blue
.. role:: gray
.. role:: magenta
.. role:: purple
.. role:: green
.. role:: red
Neuromag/Elekta/MEGIN head coordinate frame ("head", :pink:`pink axes`)
The head coordinate frame is defined through the coordinates of anatomical
landmarks on the subject's head: usually the nasion (NAS), and the left and
right preauricular points (LPA and RPA). Different MEG manufacturers may
have different definitions of the head coordinate frame. A good overview can
be seen in the FieldTrip FAQ on coordinate systems.
For Neuromag/Elekta/MEGIN, the head coordinate frame is defined by the
intersection of

1. the line between the LPA (:red:`red sphere`) and RPA
   (:purple:`purple sphere`), and
2. the line perpendicular to this LPA-RPA line that goes through the nasion
   (:green:`green sphere`).

The axes are oriented as X origin→RPA, Y origin→NAS, Z origin→upward
(orthogonal to X and Y).
.. note:: The required 3D coordinates for defining the head coordinate
          frame (NAS, LPA, RPA) are measured at a stage separate from
          the MEG data recording. There exist numerous devices to
          perform such measurements, usually called "digitizers". For
          example, see the devices by the company `Polhemus`_.
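As a brief sketch of what these landmarks look like in the data (using the ``raw`` loaded above), the digitized fiducials are stored in ``raw.info['dig']`` in head coordinates, so LPA and RPA should lie on the X axis and the nasion on the Y axis:
In [ ]:
from mne.io.constants import FIFF

# Cardinal (fiducial) points in raw.info['dig'] are expressed in head coords
names = {FIFF.FIFFV_POINT_LPA: 'LPA', FIFF.FIFFV_POINT_NASION: 'NAS',
         FIFF.FIFFV_POINT_RPA: 'RPA'}
for point in raw.info['dig']:
    if point['kind'] == FIFF.FIFFV_POINT_CARDINAL:
        print(names[point['ident']], np.round(point['r'] * 1000, 1), 'mm')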
MEG device coordinate frame ("meg", :blue:`blue axes`)
The MEG device coordinate frame is defined by the respective MEG
manufacturers. All MEG data is acquired with respect to this coordinate
frame. To account for the anatomy and position of the subject's head, we
use so-called head position indicator (HPI) coils. The HPI coils are
placed at known locations on the scalp of the subject and emit
high-frequency magnetic fields used to coregister the head coordinate
frame with the device coordinate frame.
From the Neuromag/Elekta/MEGIN user manual:

    The origin of the device coordinate system is located at the center
    of the posterior spherical section of the helmet with X axis going
    from left to right and Y axis pointing front. The Z axis is, again,
    normal to the plane with positive direction up.
.. note:: The HPI coils are shown as :magenta:`magenta spheres`.
          Coregistration happens at the beginning of the recording and
          the data is stored in ``raw.info['dev_head_t']``.
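As a small sketch (again using the ``raw`` loaded above), you can print this device→head transform directly, and :func:`mne.transforms.invert_transform` gives the reverse mapping back into the device frame:
In [ ]:
from mne.transforms import invert_transform

dev_head_t = raw.info['dev_head_t']
print(dev_head_t)                    # MEG device -> head
print(invert_transform(dev_head_t))  # head -> MEG device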
MRI coordinate frame ("mri", :gray:`gray axes`)
Defined by Freesurfer, the MRI (surface RAS) origin is at the center of a
256×256×256 1 mm isotropic volume (which may not be in the center of the
head).
.. note:: We typically align the MRI coordinate frame to the head
          coordinate frame through a `rotation and translation matrix`_
          that we refer to in MNE as ``trans``.
Let's try using ``trans=None``, which (incorrectly!) equates the MRI and
head coordinate frames.
In [ ]:
mne.viz.plot_alignment(raw.info, trans=None, subject='sample', src=src,
                       subjects_dir=subjects_dir, dig=True,
                       surfaces=['head-dense', 'white'], coord_frame='meg')
In [ ]:
mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
                       src=src, subjects_dir=subjects_dir, dig=True,
                       surfaces=['head-dense', 'white'], coord_frame='meg')
Defining the head↔MRI ``trans`` using the GUI

You can try creating the head↔MRI transform yourself using
:func:`mne.gui.coregistration`.
* First you must load the digitization data from the raw file
  (``Head Shape Source``). The MRI data is already loaded if you provide the
  ``subject`` and ``subjects_dir``. Toggle ``Always Show Head Points`` to see
  the digitization points.
* To set the landmarks, toggle the ``Edit`` radio button in ``MRI Fiducials``.
* Set the landmarks by clicking the radio button (LPA, Nasion, RPA) and then
  clicking the corresponding point in the image.
* After doing this for all the landmarks, toggle the ``Lock`` radio button.
  You can omit outlier points, so that they don't interfere with the
  finetuning.
.. note:: You can save the fiducials to a file and pass
          ``mri_fiducials=True`` to plot them in
          :func:`mne.viz.plot_alignment`. The fiducials are saved to the
          subject's bem folder by default.
* Click ``Fit Head Shape``. This will align the digitization points to the
  head surface. Sometimes the fitting algorithm doesn't find the correct
  alignment immediately. You can try first fitting using LPA/RPA or fiducials
  and then align according to the digitization. You can also finetune
  manually with the controls on the right side of the panel.
* Click ``Save As...`` (lower right corner of the panel), set the filename
  and read it with :func:`mne.read_trans` (a sketch of reading the saved file
  back follows below).

For more information, see step by step instructions
`in these slides <https://www.slideshare.net/mne-python/mnepython-coregistration>`_.
Uncomment the following line to align the data yourself.
In [ ]:
# mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir)
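Once you have saved a transform from the GUI, you could read it back and check the alignment, roughly like this (the filename below is hypothetical; use whatever you chose in ``Save As...``, then uncomment):
In [ ]:
# my_trans_fname = op.join(data_path, 'MEG', 'sample', 'my_audvis-trans.fif')
# my_trans = mne.read_trans(my_trans_fname)
# mne.viz.plot_alignment(raw.info, trans=my_trans, subject='sample',
#                        subjects_dir=subjects_dir, surfaces='head-dense',
#                        dig=True, mri_fiducials=True)
# dists = mne.dig_mri_distances(raw.info, my_trans, 'sample',
#                               subjects_dir=subjects_dir)
# print('Mean distance to head surface: %0.1f mm' % (1000 * np.mean(dists)))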
The surface alignments above are possible if you have the surfaces available
from Freesurfer. :func:`mne.viz.plot_alignment` automatically searches for
the correct surfaces from the provided ``subjects_dir``. Another option is
to use a :ref:`spherical conductor model <eeg_sphere_model>`. It is passed
through the ``bem`` parameter.
In [ ]:
sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto')
src = mne.setup_volume_source_space(sphere=sphere, pos=10.)
mne.viz.plot_alignment(
    raw.info, eeg='projected', bem=sphere, src=src, dig=True,
    surfaces=['brain', 'outer_skin'], coord_frame='meg', show_axes=True)
It is also possible to use :func:`mne.gui.coregistration` to warp a subject
(usually ``fsaverage``) to subject digitization data, see
`these slides <https://www.slideshare.net/mne-python/mnepython-scale-mri>`_.
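For example, launching the same GUI with ``fsaverage`` as the subject (assuming the ``fsaverage`` surfaces are available in your ``subjects_dir``) lets you scale that template MRI to your digitization data:
In [ ]:
# mne.gui.coregistration(subject='fsaverage', subjects_dir=subjects_dir)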