PROMISE12 prostate segmentation demo

Preparation:

1) Make sure you have set up the PROMISE12 data set. If not, download it from https://promise12.grand-challenge.org/ (registration required) and run data/PROMISE12/setup.py

2) Make sure you are in the NiftyNet root directory, setting niftynet_path to the path that contains the niftynet folder:


In [ ]:
import os
import sys
niftynet_path = r'path/to/NiftyNet'
os.chdir(niftynet_path)

3) Make sure you have all the dependencies installed (replacing gpu with cpu for cpu-only mode):


In [ ]:
import pip
# Note: pip.main() was removed in pip 10+; on newer pip versions, run
# these installs from a shell instead.
# For GPU support, install the GPU requirements instead:
# pip.main(['install', '-r', 'requirements-gpu.txt'])
pip.main(['install', '-r', 'requirements-cpu.txt'])
pip.main(['install', 'SimpleITK>=1.0.0'])
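Before running anything heavy, a quick sanity check can confirm that the key imports resolve. This is a small hypothetical helper, not part of NiftyNet; the package list is an assumption based on the requirements files.

```python
import importlib.util

# Hypothetical sanity check (not part of NiftyNet): confirm the demo's
# key dependencies are importable before running the training script.
for pkg in ['numpy', 'tensorflow', 'SimpleITK']:
    found = importlib.util.find_spec(pkg) is not None
    print('{}: {}'.format(pkg, 'ok' if found else 'MISSING'))
```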

Training a network from the command line

The simplest way to use NiftyNet is via the command-line net_segment.py script. Normally, you would run a command like the following from the NiftyNet root directory:

python net_segment.py train --conf demos/PROMISE12/promise12_demo_train_config.ini --max_iter 10

Notice that we use a configuration file specific to this experiment; it supplies the default settings, which can be overridden on the command line (here, --max_iter).
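The interaction between file defaults and command-line overrides can be illustrated with a toy sketch. The section and option names here are simplified stand-ins, not NiftyNet's actual parser:

```python
import configparser

# Toy illustration of config-file defaults vs. command-line overrides;
# the section and option names are made up for this sketch.
config = configparser.ConfigParser()
config.read_string("""
[TRAINING]
max_iter = 10000
lr = 0.001
""")

defaults = dict(config['TRAINING'])
cli_overrides = {'max_iter': '10'}        # e.g. parsed from --max_iter 10
settings = {**defaults, **cli_overrides}  # command-line values win
```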

To execute NiftyNet from within the notebook, you can run the following Python code:


In [ ]:
import os
import sys
import niftynet
sys.argv=['','train','-a','net_segment','--conf',os.path.join('demos','PROMISE12','promise12_demo_train_config.ini'),'--max_iter','10']
niftynet.main()

Now you have trained (a few iterations of) a deep learning network for medical image segmentation. If you have some time on your hands, you can finish training the network (by leaving off the max_iter argument) and try it out by running the following command:

python net_segment.py inference --conf demos/PROMISE12/promise12_demo_inference_config.ini

or the following Python code in the notebook:


In [ ]:
import os
import sys
import niftynet
sys.argv=['', 'inference','-a','net_segment','--conf',os.path.join('demos','PROMISE12','promise12_demo_inference_config.ini')]
niftynet.main()

Otherwise, you can load up some pre-trained weights for the network:

python net_segment.py inference --conf demos/PROMISE12/promise12_demo_inference_config.ini --model_dir demos/PROMISE12/pretrained

or the following Python code in the notebook:


In [ ]:
import os
import sys
import niftynet
sys.argv=['', 'inference','-a','net_segment','--conf',os.path.join('demos','PROMISE12','promise12_demo_inference_config.ini'), '--model_dir', os.path.join('demos','PROMISE12','pretrained')]
niftynet.main()

You can find your segmented images in output/promise12_demo.

NiftyNet has taken care of a lot of details behind the scenes:

  1. Organizing the data into a dataset of images and segmentation labels
  2. Building a deep learning network (in this case, based on V-Net by Milletari et al.)
  3. Adding deep learning infrastructure, such as a loss function for segmentation and the Adam optimizer
  4. Adding data augmentation, where the images are zoomed and rotated a little for every training step so that the network does not over-fit the data
  5. Running the training algorithm

All of this was controlled by the configuration file.
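The augmentation step can be sketched in a few lines. This is a toy nearest-neighbour random zoom in plain NumPy, not NiftyNet's actual augmentation code; the function name and zoom range are made up for the sketch:

```python
import numpy as np

def random_zoom(volume, rng, low=0.9, high=1.1):
    """Nearest-neighbour zoom by a random factor, resampled back onto the
    original grid -- a toy stand-in for spatial augmentation."""
    factor = rng.uniform(low, high)
    # For each axis, map output coordinates back to (clipped) input indices.
    idx = [np.clip((np.arange(n) / factor).round().astype(int), 0, n - 1)
           for n in volume.shape]
    return volume[np.ix_(*idx)]

rng = np.random.default_rng(0)
vol = np.arange(8 ** 3, dtype=np.float32).reshape(8, 8, 8)
aug = random_zoom(vol, rng)
```

Because a fresh random factor is drawn each call, every training step would see a slightly different version of the same volume.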

The configuration file

Let's take a closer look at the configuration file. Further details about the configuration settings are available in config/readme.md

[promise12]
path_to_search = data/PROMISE12/TrainingData_Part1,data/PROMISE12/TrainingData_Part2,data/PROMISE12/TrainingData_Part3
filename_contains = Case,mhd
filename_not_contains = Case2,segmentation
spatial_window_size = (64, 64, 64)
interp_order = 3
axcodes=(A, R, S)

[label]
path_to_search = data/PROMISE12/TrainingData_Part1,data/PROMISE12/TrainingData_Part2,data/PROMISE12/TrainingData_Part3
filename_contains = Case,_segmentation,mhd
filename_not_contains = Case2
spatial_window_size = (64, 64, 64)
interp_order = 3
axcodes=(A, R, S)

These lines define how NiftyNet organizes your data. In this case, in the ./data/PROMISE12 folder there is one T2-weighted MR image named 'Case??_T2.nii.gz' and one reference segmentation named 'Case??_segmentation.nii.gz' per patient. The images for each patient are automatically grouped because they share the same prefix 'Case??'. For training, we exclude patients Case20-Case26, and for inference, we only include patients Case20-Case26, so that our training and inference data are mutually exclusive.
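The filename_contains / filename_not_contains filtering can be sketched roughly like this. It is a simplified stand-in, not NiftyNet's actual file-grouping code:

```python
def match(names, contains, not_contains):
    """Keep names containing every 'contains' token and none of the
    'not_contains' tokens -- mimicking filename_contains /
    filename_not_contains from the config."""
    return [n for n in names
            if all(c in n for c in contains)
            and not any(c in n for c in not_contains)]

names = ['Case00_T2.mhd', 'Case00_segmentation.mhd',
         'Case21_T2.mhd', 'Case21_segmentation.mhd']
images = match(names, ['Case', 'mhd'], ['Case2', 'segmentation'])
labels = match(names, ['Case', '_segmentation', 'mhd'], ['Case2'])
# images -> ['Case00_T2.mhd'], labels -> ['Case00_segmentation.mhd']
```

Note how the 'Case2' exclusion token removes both Case21 files, which is how the training configuration holds out the inference patients.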

[SYSTEM]
cuda_devices = ""
num_threads = 2
num_gpus = 1
model_dir = ./promise12_model

These lines set up some system parameters: which GPUs to use (in this case, whatever is available), where to save the trained network parameters, and how many threads to use for preparing data in the background.

The following lines specify network properties.

[NETWORK]
name = dense_vnet
activation_function = prelu
batch_size = 1

# volume level preprocessing
volume_padding_size = 0

# histogram normalisation
histogram_ref_file = standardisation_models.txt
norm_type = percentile
cutoff = (0.01, 0.99)
normalisation = True
whitening = True
normalise_foreground_only=True
foreground_type = otsu_plus
multimod_foreground_type = and
window_sampling = resize

# how many images to queue up in advance so that the GPU isn't waiting for data
queue_length = 8
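The percentile cutoff and whitening options can be illustrated with a toy sketch. NiftyNet's actual histogram standardisation (driven by histogram_ref_file) is more involved; this just shows percentile clipping followed by whitening, with a made-up function name:

```python
import numpy as np

def normalise(volume, cutoff=(0.01, 0.99)):
    """Clip intensities at the given percentiles, then whiten to zero
    mean and unit standard deviation -- loosely mirroring the
    cutoff / whitening options above."""
    lo, hi = np.quantile(volume, cutoff)
    v = np.clip(volume, lo, hi).astype(np.float64)
    return (v - v.mean()) / (v.std() + 1e-8)

rng = np.random.default_rng(0)
vol = rng.normal(100.0, 20.0, size=(16, 16, 16))
norm = normalise(vol)
```

Clipping at the 1st and 99th percentiles discards extreme outlier intensities before whitening, so a few very bright voxels cannot dominate the scaling.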

Summary

In this demo

  1. you learned to run training and inference for a deep-learning-based segmentation pipeline from the command line and directly from Python code;
  2. you learned how NiftyNet configuration files control the learning and inference process; and
  3. you learned multiple ways to tell NiftyNet which data to use.