Here we have run an HCP dataset through DSI Studio using the recommended parameters from its documentation. The images (dwi mask, dwi data, and graddev files) were downloaded directly from connectomedb.org. The resulting fib file is in 101915/data/input.
Here a MITTENS object is created using default parameters.
In [1]:
from mittens import MITTENS
mitn = MITTENS(fibgz_file="101915/data/input/101915.src.gz.odf8.f5rec.fy.gqi.1.25.fib.gz")
In [2]:
import os
os.makedirs("101915/data/mittens_output", exist_ok=True)  # safe if the directory already exists
mitn.calculate_transition_probabilities(output_prefix="HCP")
This writes out NIfTI images for the transition probabilities to each neighbor using both the singleODF and doubleODF methods. Additionally, you will find volumes written out for CoDI and CoAsy. Writing to NIfTI is useful for quickly assessing the quality of the output: the CoDI volumes should look a lot like GFA or FA images.
One can load these images directly to create a MITTENS object. This bypasses the need to re-calculate transition probabilities and is the recommended way to access transition probabilities before building and saving Voxel Graphs.
Here we demonstrate loading transition probabilities from NIfTI files and verify that their contents are identical to those generated by calculating transition probabilities from a fib.gz file.
In [4]:
nifti_mitn = MITTENS(nifti_prefix="HCP")
import numpy as np
print("singleODF outputs are identical:",
np.allclose(mitn.singleODF_results,nifti_mitn.singleODF_results))
print("doubleODF outputs are identical:",
np.allclose(mitn.doubleODF_results,nifti_mitn.doubleODF_results,equal_nan=True))
You'll notice loading from NIfTI is much faster!
Transition probabilities need to be converted to edge weights somehow. The MITTENS object accomplishes this through the build_graph function, which offers a number of options. Here is the relevant portion of the build_graph documentation:
Schemes for shortest paths:
---------------------------
``"negative_log_p"``:
Transition probabilities are log transformed and made negative. This is similar to the Zalesky 2009 strategy.
``"minus_iso_negative_log"``:
Isotropic probabilities are subtracted from transition probabilities. Edges are not added when transition probabilities are less than the isotropic probability.
``"minus_iso_scaled_negative_log"``:
Same as ``"minus_iso_negative_log"`` except probabilities are re-scaled to sum to 1 *before* the log transform is applied.
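The three schemes can be illustrated with a small pure-NumPy sketch. The arrays below are toy values, not build_graph internals; `p` stands in for a voxel's transition probabilities to its neighbors and `iso` for the isotropic (chance) probability.

```python
import numpy as np

p = np.array([0.05, 0.10, 0.30])  # toy transition probabilities to 3 neighbors
iso = 0.08                        # toy isotropic (chance) probability

# "negative_log_p": higher probability -> smaller edge weight
w_nlp = -np.log(p)

# "minus_iso_negative_log": drop edges at or below chance, then -log
above = p - iso
keep = above > 0                  # only 2 of the 3 toy edges survive
w_minus_iso = -np.log(above[keep])

# "minus_iso_scaled_negative_log": re-scale the surviving mass to sum
# to 1 *before* the log transform
scaled = above[keep] / above[keep].sum()
w_scaled = -np.log(scaled)
```

Note that with either minus-iso scheme, sub-threshold neighbors simply get no edge, so shortest paths can never pass through a transition that is less likely than chance.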
You also have to pick whether to use singleODF or doubleODF probabilities. You can easily create graphs using either.
In [ ]:
doubleODF_nlp = mitn.build_graph(doubleODF=True, weighting_scheme="negative_log_p")
doubleODF_nlp.save("101915/data/mittens_output/HCP_doubleODF_nlp.mat")
If no spatial mask is specified when the MITTENS object is constructed, the generous brain mask estimated by DSI Studio will be used. This means the file sizes will be somewhat larger and the calculations will take longer. However, we recommend using the larger mask, as you can easily apply masks to the voxel graphs later. It's better to have a voxel and not need it than to re-run the transition probability calculation.
MITTENS mimics DSI Studio: internally everything is done in LPS+ orientation, and results are written out in RAS+ orientation. The affine used in the transition probability NIfTI images is the same as in the images written by DSI Studio. This is an unusual corner of scanner coordinate conventions, but it more closely matches the coordinates used by streamlines.
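As a standalone illustration of the LPS+/RAS+ relationship (a toy affine, not the one MITTENS actually writes): a point with LPS+ world coordinates (x, y, z) has RAS+ coordinates (-x, -y, z), so converting a voxel-to-world affine between the two conventions negates its first two rows.

```python
import numpy as np

# Toy voxel-to-world affine in LPS+ (1.25 mm isotropic, arbitrary origin)
affine_lps = np.array([[1.25, 0.00, 0.00,  -90.0],
                       [0.00, 1.25, 0.00, -126.0],
                       [0.00, 0.00, 1.25,  -72.0],
                       [0.00, 0.00, 0.00,    1.0]])

# Negating the first two rows gives the RAS+ equivalent
flip = np.diag([-1.0, -1.0, 1.0, 1.0])
affine_ras = flip @ affine_lps

ijk = np.array([10, 20, 30, 1.0])  # a voxel index in homogeneous form
xyz_lps = affine_lps @ ijk
xyz_ras = affine_ras @ ijk
# The world x and y coordinates change sign; z is unchanged.
```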
Here we load a Voxel Graph directly from a matfile and check that it exactly matches the graph produced above.
In [ ]:
from mittens import VoxelGraph
mat_vox_graph = VoxelGraph("101915/data/mittens_output/HCP_doubleODF_nlp.mat")
from networkit.algebraic import adjacencyMatrix
matfile_adj = adjacencyMatrix(mat_vox_graph.graph,matrixType="sparse")
mitn_adj = adjacencyMatrix(doubleODF_nlp.graph,matrixType="sparse")
mat_indices = np.nonzero(matfile_adj)
indices = np.nonzero(mitn_adj)
print("row indices match:", np.all(mat_indices[0] == indices[0]))
print("column indices match:",np.all(mat_indices[1] == indices[1]))
print("Weights all match:", np.allclose(matfile_adj[indices],mitn_adj[indices]))