Welcome to Neuroimaging in Python

If you are running this from the SciNet jupyter hub, then use the paths in the top half of the block below.

If you are running this on a local laptop, edit the bottom half of the first block with the paths to your desired directories.


In [ ]:
# We are going to use a single subject from the ABIDE dataset.
import os

## use these commands if you are running this on SciNet
my_home = os.environ.get('HOME')
my_scratch = my_home.replace('/home/', '/scratch/')
tutorial_dir = os.path.join(my_scratch, 'ss2017_mripython')
my_nilearn_data = os.path.join(tutorial_dir, 'data','nilearn_data')
cc200_nii = os.path.join(os.path.dirname(os.getcwd()), 'rois', 'cc200_roi_atlas.nii.gz')

## if this directory doesn't exist - make it..
rois_dir = os.path.join(tutorial_dir, 'data', 'rois')
if not os.path.exists(rois_dir):
    os.makedirs(rois_dir)

# ## edit this section if you are running this on your own laptop
# tutorial_dir ='/home/edickie/Documents/ss2017_16pythonmri/'  ## point this to the location where you want the data
# my_nilearn_data = os.path.join(tutorial_dir, 'data', 'nilearn_data') 

# ## point this to the location of the craddock atlas you downloaded
# cc200_nii = os.path.join(tutorial_dir, 'data', 'rois', 'cc200_roi_atlas.nii.gz')

In [ ]:
os.path.dirname(os.getcwd())

In [ ]:
# this makes matplotlib plots render inline in the notebook
import matplotlib.pyplot as plt
%matplotlib inline

Using nilearn data fetchers to grab our tutorial dataset

We are grabbing data from the ABIDE dataset, a data sharing initiative with over 1000 scans. Check out the references included in the abide object for more info.

Thanks to the work of the Preprocessed Connectomes Project, a lot of the data processing has already been done for us! We are going to grab the preprocessed but unfiltered resting state fMRI data.


In [ ]:
from nilearn import datasets

# We specify the site and number of subjects we want to download
abide = datasets.fetch_abide_pcp(data_dir=my_nilearn_data,
                                 derivatives=['func_mean', 'func_preproc', 'func_mask'],
                                 quality_checked=True,
                                 n_subjects=4)

print(abide.description)

In [ ]:
print('{}'.format(abide.description))

In [ ]:
abide.func_preproc

Here we will define the files we want to work with for the rest of the tutorial

We need:

+ some Regions of Interest (rois) to use as masks for our analysis
+ a functional MR image
+ (for plotting) The mean of our functional MR image
+ (for masking) a mask defining the brain vs outside of head
+ to decide which roi we will focus on

In [ ]:
rois = cc200_nii
func = abide.func_preproc[3]
mask = abide.func_mask[3]
mean_func = abide.func_mean[3]
roi = 174

In [ ]:
from nilearn import plotting

plotting.plot_roi(cc200_nii)

In [ ]:
plotting.plot_roi(cc200_nii, bg_img=mean_func, alpha = 0.5, title = "cc200 atlas")

Using nibabel to load nifti images


In [ ]:
# this is for interacting with NIFTI files
import nibabel as nib

func_nib = nib.load(func)
print(func_nib)

In [ ]:
# note: get_affine() is deprecated in newer nibabel; the .affine attribute (next cell) is preferred
affine = func_nib.get_affine()
affine

In [ ]:
affine = func_nib.affine
affine
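
The affine maps voxel indices (i, j, k) to scanner-space millimetre coordinates. A minimal sketch with a made-up 3 mm isotropic affine (the numbers here are illustrative, not from this dataset):

```python
import numpy as np

# a toy affine: 3 mm isotropic voxels, origin shifted to (-90, -126, -72)
toy_affine = np.array([
    [3., 0., 0.,  -90.],
    [0., 3., 0., -126.],
    [0., 0., 3.,  -72.],
    [0., 0., 0.,    1.],
])

voxel = np.array([30, 42, 24, 1])  # homogeneous voxel index (i, j, k, 1)
world = toy_affine @ voxel
print(world[:3])  # scanner-space (x, y, z) in mm
```

Multiplying out, voxel (30, 42, 24) lands exactly on the scanner origin (0, 0, 0) with this particular affine.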

In [ ]:
# note: get_header() is deprecated in newer nibabel; use the .header attribute
header = func_nib.header
print(header.keys())

In [ ]:
# get the dimensions for the fMRI file
dims = func_nib.shape
dims

In [ ]:
# use get_fdata to extract the data as a numpy array
# (get_data() is deprecated in newer nibabel)
func_data = func_nib.get_fdata()
func_data.shape

In [ ]:
# reshape to voxels * timepoints (4D --> 2D)
func_data = func_data.reshape(dims[0]*dims[1]*dims[2], dims[3])
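
A quick sanity check of this reshape on a tiny toy array, confirming that flattening the three spatial dimensions to a voxels-by-timepoints matrix (and restoring them) preserves voxel order:

```python
import numpy as np

# toy 4D "image": 2x2x2 voxels, 5 timepoints
toy = np.arange(2 * 2 * 2 * 5).reshape(2, 2, 2, 5)

# flatten the spatial dims to voxels x timepoints, then undo it
flat = toy.reshape(2 * 2 * 2, 5)
restored = flat.reshape(2, 2, 2, 5)

print(flat.shape)                       # (8, 5)
print(np.array_equal(toy, restored))    # True: voxel order is preserved
```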

Checking that our functional and rois files have the same dimensions...


In [ ]:
# now do the same thing for rois
rois_data = nib.load(cc200_nii).get_fdata()

print(rois_data.shape)

# compare all three spatial dimensions
if rois_data.shape[0:3] == dims[0:3]:
    print("ROIs and func file dimensions match, Hooray!!")
else:
    print("FAIL, they do not match")

Using nilearn to resample the rois nifti file so that its dimensions match the functional image


In [ ]:
from nilearn import image

resampled_cc200 = image.resample_to_img(cc200_nii, mean_func, interpolation='nearest')

plotting.plot_roi(resampled_cc200, mean_func)

resampled_cc200.to_filename(os.path.join(rois_dir, "resample_cc200.nii.gz"))

In [ ]:
# now load and check the resampled rois
new_rois = os.path.join(tutorial_dir, 'data', 'rois', "resample_cc200.nii.gz")
rois_data = nib.load(new_rois).get_fdata()

print(rois_data.shape)

# compare all three spatial dimensions
if rois_data.shape[0:3] == dims[0:3]:
    print("ROIs and func file dimensions match, Hooray!!")
else:
    print("FAIL, they do not match")

Calculating our own seed correlation map using numpy


In [ ]:
rois_data = rois_data.reshape(dims[0]*dims[1]*dims[2], 1)

Calculating our mean timeseries


In [ ]:
import numpy as np

# get the seed time series
idx = np.where(rois_data == roi)[0]
ts = np.mean(func_data[idx, :], axis=0)

# look at the timeseries
plt.plot(ts)
plt.show()
len(ts) # looks like it's the right length (i.e., our axis is correct)
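
The same np.where / mean indexing pattern on tiny toy arrays, just to make the axis choice concrete (the labels and values here are made up):

```python
import numpy as np

labels = np.array([0, 7, 7, 3])             # toy voxel labels, one per voxel
data = np.array([[1., 2.], [3., 4.],
                 [5., 6.], [7., 8.]])       # 4 voxels x 2 timepoints

idx = np.where(labels == 7)[0]              # row indices of voxels in roi 7
print(idx)                                  # [1 2]

mean_ts = np.mean(data[idx, :], axis=0)     # average over voxels, per timepoint
print(mean_ts)                              # [4. 5.] -- one value per timepoint
```

Averaging with axis=0 collapses the voxel dimension, so the result has the same length as the number of timepoints.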

Correlate the mean timeseries with the timeseries from every voxel in the func file


In [ ]:
# make an output matrix
output = np.zeros(dims[0]*dims[1]*dims[2])

# correlate seed against all voxels
for i in range(dims[0]*dims[1]*dims[2]):
    output[i] = np.corrcoef(ts, func_data[i, :])[0][1]

This will run faster, and avoid numpy's "true divide" warnings, if we incorporate the brain mask


In [ ]:
## add a mask to get rid of the weird bits
mask_data = nib.load(mask).get_fdata()
mask_data = mask_data.reshape(dims[0]*dims[1]*dims[2], 1)
idx_mask = np.where(mask_data > 0)[0]

## recalculate the ts taking the mask into account
idx_masked = np.intersect1d(idx,idx_mask)
ts = np.mean(func_data[idx_masked, :], axis=0)

# make an output matrix
output = np.zeros(dims[0]*dims[1]*dims[2])

# correlate seed against all voxels
for i in np.arange(len(idx_mask)):
    output[idx_mask[i]] = np.corrcoef(ts, func_data[idx_mask[i], :])[0][1]
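
As an aside, the per-voxel loop can be replaced by a single matrix product if you z-score along the time axis first; a sketch of the same idea on random toy data (the names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 50))   # 100 "voxels" x 50 timepoints
seed = data[:10].mean(axis=0)           # mean timeseries of a toy "roi"

# z-score along time; correlation then becomes a scaled dot product
data_z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
seed_z = (seed - seed.mean()) / seed.std()
r = data_z @ seed_z / data.shape[1]     # one correlation value per voxel

# matches the loop-based np.corrcoef version
r_loop = np.array([np.corrcoef(seed, row)[0, 1] for row in data])
print(np.allclose(r, r_loop))           # True
```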

Using nibabel to write our result out to a nifti file


In [ ]:
np.unique(rois_data)

In [ ]:
# get back to 4D
output_filename = os.path.join(tutorial_dir, 'seed_correlation.nii.gz')
output = np.reshape(output, (dims[0], dims[1], dims[2], 1))

# write the results into a NIFTI file
output = nib.nifti1.Nifti1Image(output, affine)
output.to_filename(output_filename)

nilearn's glass brain and statmap plots help us visualize our new result


In [ ]:
plotting.plot_glass_brain(output_filename, threshold = 0.5, 
                          colorbar = True)

In [ ]:
plotting.plot_stat_map(output_filename, threshold = 0.3, 
                          colorbar = True)

Instead of calculating one mean timeseries, let's do it for the whole atlas


In [ ]:
# create output matrix
output_ts = np.zeros((len(np.unique(rois_data))-1, dims[3]))

# rois_data_masked = np.multiply(rois_data,mask_data)
# load in all mean time series from atlas
for i, roi in enumerate(np.unique(rois_data)):
    if roi > 0:
        idx = np.where(rois_data == roi)[0]
        ts = np.mean(func_data[idx, :], axis=0)
        output_ts[i-1, :] = ts

# correlation matrix
cc200_correl = np.corrcoef(output_ts)

# view output
plt.imshow(cc200_correl, interpolation='nearest', cmap=plt.cm.RdBu_r)
plt.colorbar()

The seaborn package has some really awesome options, including an easy clustered heatmap plot


In [ ]:
import seaborn

seaborn.clustermap(cc200_correl)

Getting fancier with nilearn

nilearn was built by people doing machine learning on resting state fMRI data using scikit-learn.

It has A LOT of functionality beyond what we have time to teach here, including:

  • image manipulation (smoothing, resampling)
  • algorithms (ICA and dictionary learning) for parcellating your own dataset (i.e. defining your own ROIS)
  • timeseries extraction with many many options for filtering, detrending
  • multiple methods for covariance estimation (i.e. partial correlation, covariance, lasso, ...)
  • machine learning and cross validation

They also did a beautiful job of building a large set of ipython notebook tutorials to teach you about all these methods.

Go to the nilearn website for more info.

Let's use another example to see some of the more complex things nilearn can do...

We'll start by using the data fetcher to grab a different atlas. This one (dosenbach_2010) is a set of coordinates in the brain to sample from.


In [ ]:
from nilearn import datasets

dosenbach = datasets.fetch_coords_dosenbach_2010()
print(dosenbach.description)

Using nilearn's sphere masker to extract the timeseries

nilearn has a built in function for extracting timeseries from functional files and doing a little extra signal processing at the same time!


In [ ]:
from nilearn import input_data

spheres_masker = input_data.NiftiSpheresMasker(
    seeds=dosenbach.rois, #the seeds are the dosenbach roi atlas
    smoothing_fwhm=4, radius=4.5, # set the radius of a sphere around the roi you want extracted
    standardize=True, # the time-series are centered and normed (mean 0, variance 1 in the time dimension)
    detrend=True, low_pass=0.1, high_pass=0.01, t_r=2.5) # additional signal cleaning and filtering params

timeseries = spheres_masker.fit_transform(func)

print("the shape of the timeseries is {}".format(timeseries.shape))

Using nilearn's ConnectivityMeasure to calculate our correlation matrix

Available options are “correlation”, “partial correlation”, “tangent”, “covariance”, or “precision”; other covariance estimators from scikit-learn can also be plugged in (see here for an example)

Let's do partial correlation this time


In [ ]:
from nilearn.connectome import ConnectivityMeasure
from nilearn import plotting

correlation_measure = ConnectivityMeasure(kind='partial correlation')
dosenbach_matrix = correlation_measure.fit_transform([timeseries])[0]

In [ ]:
seaborn.clustermap(dosenbach_matrix)

In [ ]:
from nilearn import plotting

coords = np.vstack((
    dosenbach.rois['x'],
    dosenbach.rois['y'],
    dosenbach.rois['z'],
)).T

plotting.plot_connectome(dosenbach_matrix, node_coords = coords, edge_threshold='98%')
