This demo provides examples of the ImageReader class from the niftynet.io.image_reader module.

What is ImageReader?

The main functionality of ImageReader is to search a set of folders, return a list of image files, and load the images into memory in an iterative manner.

A tf.data.Dataset instance can be initialised from an ImageReader, which makes the module readily usable as an input op in many TensorFlow-based applications.

Why ImageReader?

  • designed for medical imaging formats and applications
  • works well with multi-modal input volumes
  • works well with tf.data.Dataset

Before the demo...

First make sure the source code is available, and import the module.

For NiftyNet installation, please check out:

http://niftynet.readthedocs.io/en/dev/installation.html


In [3]:
import sys

# path to a local copy of the NiftyNet source code (adjust to your setup)
niftynet_path = '/Users/bar/Documents/Niftynet/'
sys.path.insert(0, niftynet_path)

from niftynet.io.image_reader import ImageReader

For demonstration purposes, we download some demo data to ~/niftynet/data/:


In [4]:
from niftynet.utilities.download import download
download('anisotropic_nets_brats_challenge_model_zoo')


Accessing: https://github.com/NifTK/NiftyNetModelZoo
anisotropic_nets_brats_challenge_model_zoo: OK. 
Already downloaded. Use the -r option to download again.
Out[4]:
True

Use case: loading 3D volumes


In [ ]:
from niftynet.io.image_reader import ImageReader

data_param = {'MR': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG'}}
reader = ImageReader().initialise(data_param)

In [ ]:
# image shapes and tensorflow dtypes inferred by the initialised reader
reader.shapes, reader.tf_dtypes

In [ ]:
# read data using the initialised reader
idx, image_data, interp_order = reader(idx=0)

In [ ]:
image_data['MR'].shape, image_data['MR'].dtype

In [ ]:
# randomly sample the list of images
for _ in range(3):
    idx, image_data, _ = reader()
    print('{} image: {}'.format(idx, image_data['MR'].shape))

The images are always read into a 5D array, representing:

[height, width, depth, time, channels]
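
The shape convention can be checked directly on a loaded array. The following is a minimal sketch reusing the reader initialised above; the middle slice is an arbitrary example.


In [ ]:
# inspect the 5D layout of a loaded volume
_, image_data, _ = reader(idx=0)
volume = image_data['MR']
height, width, depth, time, channels = volume.shape
print('height={}, width={}, depth={}, time={}, channels={}'.format(
    height, width, depth, time, channels))
# a single 2D slice, e.g. the middle axial plane of the first channel
axial_slice = volume[:, :, depth // 2, 0, 0]
print('2D slice shape: {}'.format(axial_slice.shape))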

Use case: loading pairs of image and label by matching filenames

(In this case the loaded arrays are not concatenated.)


In [ ]:
from niftynet.io.image_reader import ImageReader

data_param = {'image': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'T2'},
              'label': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'Label'}}
reader = ImageReader().initialise(data_param)

In [ ]:
# image file information (without loading the volumes)
reader.get_subject(0)

In [ ]:
idx, image_data, interp_order = reader(idx=0)

image_data['image'].shape, image_data['label'].shape
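
To visit every image/label pair exactly once (rather than sampling at random), one possible pattern is sketched below; it assumes the reader exposes a num_subjects property, as in recent NiftyNet versions.


In [ ]:
# sketch: deterministically iterate over all subjects
# (assumes `reader.num_subjects` is available)
for subject_idx in range(reader.num_subjects):
    idx, image_data, _ = reader(idx=subject_idx)
    print('{}: image {}, label {}'.format(
        idx, image_data['image'].shape, image_data['label'].shape))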

Use case: loading multiple modalities of image and label by matching filenames

The following code initialises a reader with four image modalities and a label; the 'image' output is a concatenation of the arrays loaded from the four modality files. (The arrays are concatenated along the fifth, i.e. channel, dimension.)


In [ ]:
from niftynet.io.image_reader import ImageReader

data_param = {'T1':    {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'T1', 'filename_not_contains': 'T1c'},
              'T1c':   {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'T1c'},
              'T2':    {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'T2'},
              'Flair': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'Flair'},
              'label': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'Label'}}
grouping_param = {'image': ('T1', 'T1c', 'T2', 'Flair'), 'label':('label',)}
reader = ImageReader().initialise(data_param, grouping_param)

In [ ]:
_, image_data, _ = reader(idx=0)

In [ ]:
image_data['image'].shape, image_data['label'].shape
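
Since the modalities are stacked along the last (channel) axis, the concatenated 'image' array can be split back into per-modality volumes with plain numpy; the sketch below assumes the channel order follows the grouping_param tuple above.


In [ ]:
import numpy as np

# sketch: split the concatenated 'image' back into per-modality volumes
# along the channel (fifth) axis; assumes the order matches
# grouping_param['image'] defined above
modalities = ('T1', 'T1c', 'T2', 'Flair')
channels = np.split(image_data['image'], image_data['image'].shape[-1], axis=-1)
for name, channel in zip(modalities, channels):
    print('{}: {}'.format(name, channel.shape))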

More properties

The input specification supports the following additional properties:

{'csv_file', 'path_to_search',
 'filename_contains', 'filename_not_contains',
 'interp_order', 'pixdim', 'axcodes', 'spatial_window_size',
 'loader'}

see also: http://niftynet.readthedocs.io/en/dev/config_spec.html#input-data-source-section
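
As an illustration only (the property values below are hypothetical and should be adapted to your data), several of these properties can be combined in a single source specification:


In [ ]:
# illustrative only: hypothetical values showing how additional
# properties can be combined in one input source specification
data_param = {'MR': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                     'filename_contains': 'T2',
                     'interp_order': 3,            # spline interpolation order
                     'pixdim': (1.0, 1.0, 1.0),    # resample to 1mm isotropic voxels
                     'axcodes': ('R', 'A', 'S'),   # reorient volumes to RAS axes
                     'loader': 'nibabel'}}         # image loading backend
reader = ImageReader().initialise(data_param)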

Using ImageReader with image-level data augmentation layers


In [ ]:
from niftynet.io.image_reader import ImageReader
from niftynet.layer.rand_rotation import RandomRotationLayer as Rotate

data_param = {'MR': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG'}}
reader = ImageReader().initialise(data_param)

# rotate each volume by a random angle sampled uniformly from [-10, 10] degrees
rotation_layer = Rotate()
rotation_layer.init_uniform_angle([-10.0, 10.0])
reader.add_preprocessing_layers([rotation_layer])

_, image_data, _ = reader(idx=0)
image_data['MR'].shape

# import matplotlib.pyplot as plt
# plt.imshow(image_data['MR'][:, :, 50, 0, 0])
# plt.show()
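
Because the rotation layer is randomised on each read, repeatedly reading the same subject yields differently augmented copies; a quick check (using the same example slice index as the commented plotting code above):


In [ ]:
# sketch: each call applies a freshly sampled random rotation,
# so the intensities of a fixed slice change from read to read
for _ in range(3):
    _, image_data, _ = reader(idx=0)
    print('slice mean: {:.3f}'.format(image_data['MR'][:, :, 50, 0, 0].mean()))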

Using ImageReader with tf.data.Dataset


In [ ]:
import tensorflow as tf
from niftynet.io.image_reader import ImageReader

# initialise multi-modal image and label reader
data_param = {'T1':    {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'T1', 'filename_not_contains': 'T1c'},
              'T1c':   {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'T1c'},
              'T2':    {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'T2'},
              'Flair': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'Flair'},
              'label': {'path_to_search': '~/niftynet/data/BRATS_examples/HGG',
                        'filename_contains': 'Label'}}

grouping_param = {'image': ('T1', 'T1c', 'T2', 'Flair'), 'label':('label',)}
reader = ImageReader().initialise(data_param, grouping_param)

# reader as a generator
def image_label_pair_generator():
    """
    A generator wrapper of an initialised reader.
    
    :yield: a dictionary of images (numpy arrays).
    """
    while True:
        _, image_data, _ = reader()
        yield image_data

# tensorflow dataset
dataset = tf.data.Dataset.from_generator(
    image_label_pair_generator,
    output_types=reader.tf_dtypes)
    #output_shapes=reader.shapes)
dataset = dataset.batch(1)
iterator = dataset.make_initializable_iterator()

# run the tensorflow graph
with tf.Session() as sess:
    sess.run(iterator.initializer)
    for _ in range(3):
        data_dict = sess.run(iterator.get_next())
        print(data_dict.keys())
        print('image: {}, label: {}'.format(
            data_dict['image'].shape,
            data_dict['label'].shape))
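
To overlap image loading with graph execution, the dataset can also prefetch a few batches; a minimal variation of the pipeline above (same TF1-style session code as in this demo) is sketched below.


In [ ]:
# sketch: same pipeline as above, with prefetching so the next batch is
# prepared while the current one is being consumed
dataset = tf.data.Dataset.from_generator(
    image_label_pair_generator,
    output_types=reader.tf_dtypes)
dataset = dataset.batch(1).prefetch(2)
iterator = dataset.make_initializable_iterator()

with tf.Session() as sess:
    sess.run(iterator.initializer)
    data_dict = sess.run(iterator.get_next())
    print('image: {}, label: {}'.format(
        data_dict['image'].shape, data_dict['label'].shape))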