import statements locate the required modules (a module is a Python file containing functions, classes, or variables), load and initialize them if necessary, and bind names or aliases in the local namespace for the scope where the statement occurs. Through the import system, Python code in one module gains access to the code in another module.
In [ ]:
import mnefun
from score import score
import numpy as np
try:
    # Use niprov as handler for events if it's installed
    from niprov.mnefunsupport import handler
except ImportError:
    handler = None
Niprov is a Python program that uses metadata to create, store, and publish provenance for brain imaging files.
We begin by defining the processing parameters relevant to the study using the mnefun.Params class. We gain access to the variables in params using the dot (.) operator.
Note that Shift+Tab invokes module documentation in the notebook.
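Outside the notebook, the same documentation is available through Python's built-in help; for example:
In [ ]:
help(mnefun.Params)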
In [ ]:
params = mnefun.Params(tmin=-0.2, tmax=0.5, t_adjust=-4e-3,
                       n_jobs=6, bmin=-0.2, bmax=None,
                       decim=5, proj_sfreq=200, filter_length='5s')
The above statement defines a variable params bound to an instance of the mnefun.Params class, with all the attributes and methods associated with that class. To see the attributes of an object in Python you can do...
In [ ]:
dir(params)
For now, params is initialized with the keyword arguments given above, along with default values for all other variables relevant to MEG data preprocessing.
tmin & tmax define the epoching interval
t_adjust adjusts for delays in the event trigger, in units of ms
n_jobs defines the number of CPU jobs to use during parallel operations
bmin & bmax define the baseline interval, such that (-0.2, None) translates to DC offset correction over the baseline interval during averaging
decim factor by which to downsample the data after filtering when epoching data
filter_length filter length to use in FIR filtering
proj_sfreq the sample frequency to use for calculating projectors; useful since time points are not independent following low-pass filtering, and downsampling also saves computation
Note: to use the NVIDIA parallel computing platform (CUDA), set params.n_jobs_fir='CUDA' and params.n_jobs_resample='CUDA' (see the sketch after this note). This requires working CUDA development applications and other dependencies; see the mne-python installation instructions for further information.
Otherwise set n_jobs_xxx > 1 to speed up resampling and filtering operations by multi-core parallel processing.
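For instance, on a machine with a working CUDA installation, the note above translates to the following (whether CUDA is actually usable depends on your local setup):
In [ ]:
params.n_jobs_fir = 'CUDA'       # use the GPU for FIR filtering
params.n_jobs_resample = 'CUDA'  # use the GPU for resampling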
Next we define list variables that determine...
subjects list of subject identifiers
structurals list of identifiers pointing to FreeSurfer subject directories containing MRI data; here None means missing MRI data, so the inverse operation is done using a spherical head model with a best-fit sphere aligned to the subject's head shape
dates list of None or arbitrary date values as tuples, used for anonymizing the subject's data
All list variables in params have a one-to-one correspondence and are used for indexing purposes, so assertion statements are used to check, e.g., that list lengths are equal (a minimal example follows the next cell).
In [ ]:
params.subjects = ['subj_01', 'subj_02']
params.structurals = [None, 'AKCLEE_110_slim'] # None means use sphere
params.dates = [(2014, 2, 14), None] # Use "None" to more fully anonymize
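As a minimal sketch of such a consistency check (mnefun performs its own internal assertions; this line is purely illustrative):
In [ ]:
# All per-subject lists must have the same length
assert len(params.subjects) == len(params.structurals) == len(params.dates)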
In [ ]:
params.subject_indices = [0]  # Define which subjects to run
params.plot_drop_logs = True  # Set to False so that plots do not halt processing
params.on_process = handler   # Set the niprov handler to deal with events
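To run every subject rather than a subset, the indices can span the whole list; for example (an illustrative alternative, not the tutorial's setting):
In [ ]:
params.subject_indices = np.arange(len(params.subjects))  # run all subjects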
In [ ]:
params.acq_ssh = 'kambiz@minea.ilabs.uw.edu'  # Replace with your login, e.g. "you@minea.ilabs.uw.edu"
# Pass list of paths to search and fetch raw data
params.acq_dir = ['/sinuhe_data01/eric_non_space',
                  '/data101/eric_non_space',
                  '/sinuhe/data01/eric_non_space',
                  '/sinuhe/data02/eric_non_space',
                  '/sinuhe/data03/eric_non_space']
# Set parameters for remotely connecting to SSS workstation ('sws')
params.sws_ssh = 'kam@kasga.ilabs.uw.edu'  # Replace with your login, e.g. "you@kasga.ilabs.uw.edu"
params.sws_dir = '/data07/kam/sandbox'
Next we define:
run_names string identifier used in naming acquisition runs, e.g. '%s_funloc', where the '%s' placeholder is replaced by the subject ID (see the expansion example after the next cell)
get_projs_from number of acquisition runs to use to build SSP projections for filtered data
inv_names prefix string to append to inverse operator file(s)
inv_runs number of acquisition runs to use to build the inverse operator for filtered data
cov_method covariance calculation method
runs_empty name format of empty-room recordings, if any
In [ ]:
params.run_names = ['%s_funloc']
params.get_projs_from = np.arange(1)
params.inv_names = ['%s']
params.inv_runs = [np.arange(1)]
params.cov_method = 'shrunk' # Cleaner noise covariance regularization
params.runs_empty = ['%s_erm']
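To make the '%s' convention concrete, standard Python string interpolation shows how a run name expands for a given subject (illustrative only):
In [ ]:
print(params.run_names[0] % params.subjects[0])  # prints 'subj_01_funloc'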
Use the reject and flat dictionaries to pass noisy-channel criteria to mne.Epochs during the epoching procedure. These criteria are used to reject trials in which any gradiometer, magnetometer, or EEG channel exceeds the given criterion for that channel type, or is flat, during the epoching interval.
In [ ]:
params.reject = dict(grad=3500e-13, mag=4000e-15)
params.flat = dict(grad=1e-13, mag=1e-15)
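For reference, these dictionaries map directly onto the reject and flat arguments of mne.Epochs. A standalone sketch, assuming raw and events were obtained elsewhere (mnefun passes these internally during its epoching step):
In [ ]:
import mne
# raw and events are placeholders here; mnefun handles this step internally
epochs = mne.Epochs(raw, events, tmin=params.tmin, tmax=params.tmax,
                    baseline=(params.bmin, params.bmax),
                    reject=params.reject, flat=params.flat, preload=True)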
Here we define the number of SSP projectors as a list of lists. The individual lists define the PCA projections computed for the electrical signatures of the heart (ECG) and eyes (EOG), and for the continuous ERM noise. Each projection list is a 1-by-3 row vector whose columns give the number of PCA components for the Grad/Mag/EEG channel types.
In [ ]:
params.proj_nums = [[1, 1, 0],  # ECG
                    [1, 1, 2],  # EOG
                    [0, 0, 0]]  # Continuous (from ERM)
Next we choose whether the SSS filtering method uses MaxFilter or MNE. Regardless of the argument, MNEFUN uses the default MaxFilter parameter values for SSS. Users should consult the MaxFilter manual or see mne.preprocessing.maxwell_filter for more information on argument values; with the minimal invocation below, the default MaxFilter arguments for SSS & tSSS, along with movement compensation, are used.
In [ ]:
params.sss_type = 'python'
Recommended SSS denoising arguments for data from children:
sss_regularize = 'svd'  # SSS basis regularization type
tsss_dur = 4.  # Buffer duration (in seconds) for spatiotemporal SSS/tSSS
int_order = 6  # Order of internal component of spherical expansion
st_correlation = .9  # Correlation limit between inner and outer SSS subspaces
trans_to = (0, 0, .03)  # The destination location for the head
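Applied to the params object, those recommendations look like the following (a sketch; appropriate values depend on your population and recording setup):
In [ ]:
params.sss_regularize = 'svd'   # SSS basis regularization type
params.tsss_dur = 4.            # tSSS buffer duration in seconds
params.int_order = 6            # order of internal spherical expansion
params.st_correlation = .9      # inner/outer SSS subspace correlation limit
params.trans_to = (0, 0, .03)   # head destination location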
In [ ]:
params.score = score # Scoring function used to slice data into trials
# The scoring function needs to produce an event file with these values
params.in_numbers = [10, 11, 20, 21]
# Those values correspond to real categories as:
params.in_names = ['Auditory/Standard', 'Visual/Standard',
'Auditory/Deviant', 'Visual/Deviant']
If a scoring function, i.e., a score.py file, exists then it must be imported and bound to params.score in order to handle trigger events in the .fif file as desired. The scoring function is used to extract trials from the filtered data. Typically the scoring function uses mne.find_events or mnefun.extract_expyfun_events to find events on the trigger line(s) in the raw .fif file.
In [ ]:
# -*- coding: utf-8 -*-
# Copyright (c) 2014, LABS^N
# Distributed under the (new) BSD License. See LICENSE.txt for more info.
"""
----------------
Score experiment
----------------

This sample scoring script shows how to convert the serial binary stamping
from expyfun into meaningful event numbers using mnefun, and then write
out the data to the location mnefun expects.
"""

from __future__ import print_function

import os
import numpy as np
from os import path as op

import mne
from mnefun import extract_expyfun_events

# Original coding used 8XX8 to code event types, here we translate to
# a nicer decimal scheme
_expyfun_dict = {
    10: 10,  # 8448 (9) + 1 = 10: auditory std, recode as 10
    12: 11,  # 8488 (11) + 1 = 12: visual std, recode as 11
    14: 20,  # 8848 (13) + 1 = 14: auditory dev, recode as 20
    16: 21,  # 8888 (15) + 1 = 16: visual dev, recode as 21
}


def score(p, subjects):
    """Scoring function"""
    for subj in subjects:
        print(' Running subject %s... ' % subj, end='')

        # Figure out what our filenames should be
        out_dir = op.join(p.work_dir, subj, p.list_dir)
        if not op.isdir(out_dir):
            os.mkdir(out_dir)

        for run_name in p.run_names:
            fname = op.join(p.work_dir, subj, p.raw_dir,
                            (run_name % subj) + p.raw_fif_tag)
            events, presses = extract_expyfun_events(fname)[:2]
            for ii in range(len(events)):
                events[ii, 2] = _expyfun_dict[events[ii, 2]]
            fname_out = op.join(out_dir,
                                'ALL_' + (run_name % subj) + '-eve.lst')
            mne.write_events(fname_out, events)

            # get subject performance
            devs = (events[:, 2] % 2 == 1)
            has_presses = np.array([len(pr) > 0 for pr in presses], bool)
            n_devs = np.sum(devs)
            hits = np.sum(has_presses[devs])
            fas = np.sum(has_presses[~devs])
            misses = n_devs - hits
            crs = (len(devs) - n_devs) - fas
            print('HMFC: %s, %s, %s, %s' % (hits, misses, fas, crs))
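For experiments that do not use expyfun's binary trigger stamping, a scoring function can instead read the trigger channel with mne.find_events; a minimal sketch (the file name and stim channel below are hypothetical):
In [ ]:
raw = mne.io.read_raw_fif('subj_01_funloc_raw.fif', allow_maxshield=True)
events = mne.find_events(raw, stim_channel='STI101')  # read the trigger line
mne.write_events('ALL_subj_01_funloc-eve.lst', events)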
In [ ]:
# Define how to translate the above event types into evoked files
params.analyses = [
    'All',
    'AV',
]
params.out_names = [
    ['All'],
    params.in_names,
]
params.out_numbers = [
    [1, 1, 1, 1],        # Combine all trials
    params.in_numbers,   # Leave events split the same way they were scored
]
params.must_match = [
    [],
    [0, 1],  # Only ensure the standard event counts match
]
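Each list in out_numbers assigns an output category to the corresponding entry of in_numbers, so the two must be equal in length for every analysis; an illustrative check (an assumption about usage, not an mnefun API call):
In [ ]:
for numbers in params.out_numbers:
    assert len(numbers) == len(params.in_numbers)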
In [ ]:
mnefun.do_processing(
    params,
    fetch_raw=True,      # Fetch raw recording files from acquisition machine
    do_score=False,      # Do scoring to slice data into trials
    push_raw=False,      # Push raw files and SSS script to SSS workstation
    do_sss=False,        # Run SSS remotely (on sws) or locally with mne-python
    fetch_sss=False,     # Fetch SSSed files from SSS workstation
    do_ch_fix=False,     # Fix channel ordering
    gen_ssp=False,       # Generate SSP vectors
    apply_ssp=False,     # Apply SSP vectors and filtering
    plot_psd=False,      # Plot raw data power spectra
    write_epochs=False,  # Write epochs to disk
    gen_covs=False,      # Generate covariances
    gen_fwd=False,       # Generate forward solutions (and src space if needed)
    gen_inv=False,       # Generate inverses
    gen_report=False,    # Write mne report html of results to disk
    print_status=True,   # Print completeness status update
)