Copyright 2018 Google Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

This tutorial is for educational purposes only and is not intended for use in clinical diagnosis, clinical decision-making, or any other clinical use.

Training/Inference on Breast Density Classification Model on Cloud AI Platform

The goal of this tutorial is to train, deploy, and run inference on a breast density classification model. Breast density is thought to be a factor in increased breast cancer risk. This tutorial emphasizes using the Cloud Healthcare API to store, retrieve, and transcode medical images (in DICOM format) in a managed and scalable way, and focuses on using Cloud AI Platform to scalably train and serve the model.

Note: This is the Cloud AI Platform version of the AutoML Codelab found here.

Requirements

Notebook dependencies

We will need to install the hcls_imaging_ml_toolkit package found here. This toolkit makes working with DICOM objects and the Cloud Healthcare API easier. In addition, we will install dicomweb-client to help us interact with the DICOMweb API and pydicom to help us construct DICOM objects.


In [ ]:
%%bash

pip3 install git+https://github.com/GoogleCloudPlatform/healthcare.git#subdirectory=imaging/ml/toolkit
pip3 install dicomweb-client
pip3 install pydicom

Input Dataset

The dataset that will be used for training is the TCIA CBIS-DDSM dataset. This dataset contains ~2500 mammography images in DICOM format. Each image is given a BI-RADS breast density score from 1 to 4. In this tutorial, we will build a binary classifier that distinguishes between breast density "2" (scattered density) and "3" (heterogeneously dense). These are the two most common and most variably assigned scores, and the literature reports that they are particularly difficult for radiologists to distinguish consistently.


In [0]:
project_id = "MY_PROJECT" # @param
location = "us-central1"
dataset_id = "MY_DATASET" # @param
dicom_store_id = "MY_DICOM_STORE" # @param

# Input data used by Cloud ML must be in a bucket with the following format.
cloud_bucket_name = "gs://" + project_id + "-vcm"

In [0]:
%%bash -s {project_id} {location} {cloud_bucket_name}
# Create bucket.
gsutil -q mb -c regional -l $2 $3

# Allow Cloud Healthcare API to write to bucket.
PROJECT_NUMBER=`gcloud projects describe $1 | grep projectNumber | sed 's/[^0-9]//g'`
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com"
COMPUTE_ENGINE_SERVICE_ACCOUNT="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"

gsutil -q iam ch serviceAccount:${SERVICE_ACCOUNT}:objectAdmin $3
gsutil -q iam ch serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT}:objectAdmin $3
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${SERVICE_ACCOUNT} --role=roles/pubsub.publisher
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/pubsub.admin
# Allow compute service account to create datasets and dicomStores.
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.dicomStoreAdmin
gcloud projects add-iam-policy-binding $1 --member=serviceAccount:${COMPUTE_ENGINE_SERVICE_ACCOUNT} --role roles/healthcare.datasetAdmin

In [0]:
import json
import os
import google.auth
from google.auth.transport.requests import AuthorizedSession
from hcls_imaging_ml_toolkit import dicom_path

credentials, project = google.auth.default()
authed_session = AuthorizedSession(credentials)
# Path to Cloud Healthcare API.
HEALTHCARE_API_URL = 'https://healthcare.googleapis.com/v1'

# Create Cloud Healthcare API dataset.
path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets?dataset_id=' + dataset_id)
headers = {'Content-Type': 'application/json'}
resp = authed_session.post(path, headers=headers)

assert resp.status_code == 200, 'error creating Dataset, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))

# Create Cloud Healthcare API DICOM store.
path = os.path.join(HEALTHCARE_API_URL, 'projects', project_id, 'locations', location, 'datasets', dataset_id, 'dicomStores?dicom_store_id=' + dicom_store_id)
resp = authed_session.post(path, headers=headers)
assert resp.status_code == 200, 'error creating DICOM store, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
dicom_store_path = dicom_path.Path(project_id, location, dataset_id, dicom_store_id)

Next, we are going to transfer the DICOM instances to the Cloud Healthcare API.

Note: We are transferring >100GB of data, so this will take some time to complete.


In [0]:
# Store DICOM instances in Cloud Healthcare API.
path = 'https://healthcare.googleapis.com/v1/{}:import'.format(dicom_store_path)
headers = {'Content-Type': 'application/json'}
body = {
    'gcsSource': {
        'uri': 'gs://gcs-public-data--healthcare-tcia-cbis-ddsm/dicom/**'
    }
}
resp = authed_session.post(path, headers=headers, json=body)
assert resp.status_code == 200, 'error importing DICOM instances, code: {0}, response: {1}'.format(resp.status_code, resp.text)
print('Full response:\n{0}'.format(resp.text))
response = json.loads(resp.text)
operation_name = response['name']

In [0]:
import time

def wait_for_operation_completion(path, timeout, sleep_time=30): 
  success = False
  while time.time() < timeout:
    print('Waiting for operation completion...')
    resp = authed_session.get(path)
    assert resp.status_code == 200, 'error polling for Operation results, code: {0}, response: {1}'.format(resp.status_code, resp.text)
    response = json.loads(resp.text)
    if 'done' in response:
      if response['done'] and 'error' not in response:
        success = True
      break
    time.sleep(sleep_time)

  print('Full response:\n{0}'.format(resp.text))      
  assert success, "operation did not complete successfully in time limit"
  print('Success!')
  return response

In [0]:
path = os.path.join(HEALTHCARE_API_URL, operation_name)
timeout = time.time() + 40*60 # Wait up to 40 minutes.
_ = wait_for_operation_completion(path, timeout)

Explore the Cloud Healthcare DICOM dataset (optional)

This is an optional section to explore the Cloud Healthcare DICOM dataset. In the following code, we simply list the studies that we have loaded into the Cloud Healthcare API. You can modify the num_of_studies_to_print parameter to print as many studies as desired.


In [0]:
num_of_studies_to_print = 2 # @param


path = os.path.join(HEALTHCARE_API_URL, dicom_store_path.dicomweb_path_str, 'studies')
resp = authed_session.get(path)
assert resp.status_code == 200, 'error querying DICOM store, code: {0}, response: {1}'.format(resp.status_code, resp.text)
response = json.loads(resp.text)

print(json.dumps(response[:num_of_studies_to_print], indent=2))

Convert DICOM to JPEG

The ML model that we will build requires that the dataset be in JPEG format. We will leverage the Cloud Healthcare API to transcode DICOM to JPEG.

First we will create a Google Cloud Storage bucket to hold the output JPEG files. Next, we will use the ExportDicomData API to transform the DICOMs to JPEGs.


In [0]:
jpeg_bucket = cloud_bucket_name + "/images/"

Next we will convert the DICOMs to JPEGs using the ExportDicomData API.


In [0]:
%%bash -s {jpeg_bucket} {project_id} {location} {dataset_id} {dicom_store_id}
gcloud beta healthcare --project $2  dicom-stores export gcs $5 --location=$3 --dataset=$4 --mime-type="image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50" --gcs-uri-prefix=$1

We will use the operation name returned from the previous command to poll the status of ExportDicomData. We will poll until the operation completes, which should take a few minutes. When the operation is complete, the operation's done field will be set to true.

Meanwhile, you should be able to observe the JPEG images being added to your Google Cloud Storage bucket.
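
If you ran the export asynchronously (or simply want to poll the long-running operation yourself), you can reuse the wait_for_operation_completion helper defined earlier. The sketch below assumes you paste in the operation name printed by the export command; the placeholder value is hypothetical.


In [ ]:
# A minimal sketch, assuming the operation name printed by the previous gcloud
# command has been copied here. The placeholder value below is hypothetical.
export_operation_name = 'projects/MY_PROJECT/locations/us-central1/datasets/MY_DATASET/operations/MY_OPERATION_ID'

path = os.path.join(HEALTHCARE_API_URL, export_operation_name)
timeout = time.time() + 30*60 # Wait up to 30 minutes.
_ = wait_for_operation_completion(path, timeout)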

Training

We will use Transfer Learning to retrain a generically trained model to perform breast density classification. Specifically, we will use an Inception V3 checkpoint as the starting point.

The neural network we will use can roughly be split into two parts: "feature extraction" and "classification". In transfer learning, we take advantage of a pre-trained (checkpoint) model to do the "feature extraction", and add a few layers to perform the "classification" relevant to the specific problem. In this case, we are adding a dense layer with two neurons to do the classification and a softmax layer to normalize the classification score. The mammography images will be classified as either "2" (scattered density) or "3" (heterogeneously dense). See below for a diagram of the training process:

The "feature extraction" and the "classification" part will be done in the following steps, respectively.

Preprocess Raw Images using Cloud Dataflow

In this step, we will resize images to 299x299 (the input size required by Inception V3) and will run each image through the checkpoint Inception V3 model to calculate the bottleneck values. These are the feature vectors output by the feature extraction part of the model (the part that is already pre-trained). Since this process is resource intensive, we will utilize Cloud Dataflow to do it scalably. We extract the features and calculate the bottleneck values here for performance reasons, so that we don't have to recalculate them during training.

The output of this process will be a collection of TFRecords storing the bottleneck value for each image in the input dataset. This TFRecord format is commonly used to store Tensors in binary format for storage.

Finally, in this step, we will also split the input dataset into training, validation, and testing sets. The percentage allocated to each can be modified using the parameters below.


In [0]:
# GCS Bucket to store output TFRecords.
bottleneck_bucket = cloud_bucket_name + "/bottleneck" # @param

# Percentage of dataset to allocate for validation and testing.
validation_percentage = 10 # @param
testing_percentage = 10 # @param

# Number of Dataflow workers. This can be increased to improve throughput.
dataflow_num_workers = 5 # @param

# Staging bucket for training.
staging_bucket = cloud_bucket_name # @param

The following command will kick off a Cloud Dataflow pipeline that runs preprocessing. The script that has the relevant code is preprocess.py. You can check out how the pipeline is progressing here.

When the operation is done, we will begin training the classification layers.


In [0]:
%%bash -s {project_id} {jpeg_bucket} {bottleneck_bucket} {validation_percentage} {testing_percentage} {dataflow_num_workers} {staging_bucket}

# Install Python library dependencies.
pip install virtualenv
python3 -m virtualenv env
source env/bin/activate
pip install tensorflow==1.15.0 google-apitools apache_beam[gcp]==2.18.0
# Start job in Cloud Dataflow and wait for completion.
python3 -m scripts.preprocess.preprocess \
    --project $1 \
    --input_path $2 \
    --output_path "$3/record" \
    --num_workers $6 \
    --temp_location "$7/temp" \
    --staging_location "$7/staging" \
    --validation_percentage $4 \
    --testing_percentage $5
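
Once the Dataflow job finishes, you can optionally spot-check the output. The sketch below assumes the records are serialized tf.train.Example protos (the exact feature keys are defined in preprocess.py) and simply prints the feature keys of the first record found under the bottleneck output prefix.


In [ ]:
# Optional sanity check on the preprocessing output. Assumes the records are
# serialized tf.train.Example protos; the exact feature keys are defined in
# scripts/preprocess/preprocess.py.
import tensorflow as tf

record_files = tf.io.gfile.glob(bottleneck_bucket + "/record*")
print('Found {} TFRecord file(s).'.format(len(record_files)))

# tf_record_iterator works in both graph and eager mode.
for raw_record in tf.compat.v1.io.tf_record_iterator(record_files[0]):
  example = tf.train.Example()
  example.ParseFromString(raw_record)
  print('Feature keys: {}'.format(sorted(example.features.feature.keys())))
  break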

Train the Classification Layers of Model using Cloud AI Platform

In this step, we will train the classification layers of the model. These consist of just a dense layer and a softmax layer. We will use the bottleneck values calculated in the previous step as the input to these layers. We will use Cloud AI Platform to train the model. The output of this stage will be a trained model exported to GCS, which can be used for inference.

There are various training parameters below that can be tuned.


In [0]:
training_steps = 1000 # @param
learning_rate = 0.01 # @param

# Location of exported model.
exported_model_bucket = cloud_bucket_name + "/models" # @param


# Inference requires the exported model to be versioned (by default we choose version 1).
exported_model_versioned_uri = exported_model_bucket + "/1"

We'll invoke Cloud AI Platform with the above parameters. We use a GPU for training to speed things up. The script that does the training is model.py.


In [0]:
%%bash -s {location} {bottleneck_bucket} {staging_bucket} {training_steps} {learning_rate} {exported_model_versioned_uri}

# Start training on CAIP.
gcloud ai-platform jobs submit training breast_density \
    --python-version 3.7 \
    --runtime-version 1.15 \
    --scale-tier BASIC_GPU \
    --module-name "scripts.trainer.model" \
    --package-path scripts \
    --staging-bucket $3 \
    --region $1 \
    -- \
    --bottleneck_dir "$2/record" \
    --training_steps $4 \
    --learning_rate $5 \
    --export_model_path $6

You can monitor the status of the training job by running the following command. The job can take a few minutes to start up.


In [0]:
!gcloud ai-platform jobs describe breast_density

When the job has started, you can observe the logs for the training job by executing the below command (it will poll for new logs every 30 seconds).

As training progresses, the logs will output the accuracy on the training set, validation set, as well as the cross entropy. You'll generally see that the accuracy goes up, while the cross entropy goes down as the number of training iterations increases.

Finally, when the training is complete, the accuracy of the model on the held-out test set will be printed to the console. The job can take a few minutes to shut down.


In [0]:
!gcloud ai-platform jobs stream-logs breast_density --polling-interval=30

Deployment and Getting Predictions

Cloud AI Platform (CAIP) can also be used to serve the model for inference. The inference model is composed of the pre-trained Inception V3 checkpoint, along with the classification layers we trained above for breast density. First we set the inference model name/version and select a mammography image to test out.


In [0]:
model_name = "breast_density" # @param
deployment_version = "deployment" # @param

# The full name of the model.
full_model_name = "projects/" + project_id + "/models/" + model_name + "/versions/" + deployment_version

!gcloud ai-platform models create $model_name --regions $location
!gcloud ai-platform versions create $deployment_version --model $model_name --origin $exported_model_versioned_uri --runtime-version 1.15 --python-version 3.7

In [0]:
# DICOM Study/Series UID of input mammography image that we'll test.
input_mammo_study_uid = "1.3.6.1.4.1.9590.100.1.2.85935434310203356712688695661986996009" # @param
input_mammo_series_uid = "1.3.6.1.4.1.9590.100.1.2.374115997511889073021386151921807063992" # @param
input_mammo_instance_uid = "1.3.6.1.4.1.9590.100.1.2.289923739312470966435676008311959891294" # @param

Let's run inference for the image and observe the results. We should see the returned label as well as the score.


In [0]:
from base64 import b64encode, b64decode
import io
from PIL import Image
import tensorflow as tf

_INCEPTION_V3_SIZE = 299

input_file_path = os.path.join(jpeg_bucket, input_mammo_study_uid, input_mammo_series_uid, input_mammo_instance_uid + ".jpg")
with tf.io.gfile.GFile(input_file_path, 'rb') as example_img:
    # Resize the image to InceptionV3 input size.
    im = Image.open(example_img).resize((_INCEPTION_V3_SIZE,_INCEPTION_V3_SIZE))
    imgByteArr = io.BytesIO()
    im.save(imgByteArr, format='JPEG')
    b64str = b64encode(imgByteArr.getvalue()).decode('utf-8')
    with open('input_image.json', 'a') as outfile:
        json.dump({'inputs': [{'b64': b64str}]}, outfile)
        outfile.write('\n')

predictions = !gcloud ai-platform predict --model $model_name --version $deployment_version --json-instances='input_image.json'
print(predictions)

Getting Explanations

There are limits and caveats when using the Explainable AI feature on CAIP. Read about them here.

The Explainable AI feature of CAIP can be used to provide visibility as to why the model returned a prediction for a given input. In this codelab, we are going to use this feature to figure out which pixels in the example mammography image contributed the most to the prediction. This can be useful for debugging model performance and improving the confidence in the model. Read here for more details. See below for sample output.

To get started, we will first deploy a version of the model to CAIP with Explainable AI enabled.


In [0]:
explainable_version = "explainable_ai" # @param

We'll create an Explainable AI configuration file. This allows us to specify the input and output tensors to correlate. See here for more details. Below we'll correlate the input image tensor with the output of the softmax layer. The Explainable AI configuration file must be stored in the model directory.


In [0]:
import json
import os
import scripts.constants as constants

explainable_metadata = {
  "outputs": {
    "probability": {
      "output_tensor_name": constants.OUTPUT_SOFTMAX_TENSOR_NAME + ":0",
    }
  },
  "inputs": {
    "img_bytes": {
      "input_tensor_name": constants.INPUT_PIXELS_TENSOR_NAME + ":0",
      "input_tensor_type": "numeric",
      "modality": "image",
    }
  },
 "framework": "tensorflow"
}

# The configuration file in the CAIP model directory.
with tf.io.gfile.GFile(os.path.join(exported_model_versioned_uri, 'explanation_metadata.json'), 'w') as output_file:
  json.dump(explainable_metadata, output_file)

Finally, let's deploy the model.


In [0]:
!gcloud beta ai-platform versions create $explainable_version \
--model $model_name \
--origin $exported_model_versioned_uri \
--runtime-version 1.15 \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method integrated-gradients \
--num-integral-steps 25

Next, we'll ask for the annotated image that includes the Explainable AI overlay.


In [0]:
explanations = !gcloud beta ai-platform explain --model $model_name --version $explainable_version --json-instances='input_image.json'
response = json.loads(explanations.s)

Next, let's print the annotated image (with overlay). We can see green highlights for the pixels that give the biggest signal for the highest scoring class.


In [0]:
import base64
import io
from PIL import Image

assert len(response['explanations']) == 1

LABELS = ['2', '3']
prediction = response['explanations'][0]
predicted_label = LABELS[prediction['attributions_by_label'][0]['label_index']]
confidence = prediction['attributions_by_label'][0]['example_score']

print('Predicted class: ', predicted_label)
print('Confidence: ', confidence)

b64str = prediction['attributions_by_label'][0]['attributions']['img_bytes']['b64_jpeg']
display(Image.open(io.BytesIO(base64.b64decode(b64str))))

Integration in the clinical workflow

To allow medical imaging ML models to be easily integrated into clinical workflows, an inference module can be used. A standalone modality, a PACS system, or a DICOM router can push DICOM instances into Cloud Healthcare DICOM stores, triggering ML models to run inference. These inference results can then be structured into various DICOM formats (e.g. DICOM structured reports) and stored in the Cloud Healthcare API, from which they can be retrieved by the client application.

The inference module is built as a Docker container and deployed using Kubernetes, allowing you to easily scale your deployment. The dataflow for inference can look as follows (see the corresponding diagram below; a simplified code sketch follows the list):

  1. Client application uses STOW-RS to push a new DICOM instance to the Cloud Healthcare DICOMWeb API.

  2. The insertion of the DICOM instance triggers a Cloud Pubsub message to be published. The inference module will pull incoming Pubsub messages and will receive a message for the previously inserted DICOM instance.

  3. The inference module will retrieve the instance in JPEG format from the Cloud Healthcare API using WADO-RS.

  4. The inference module will send the JPEG bytes to the model hosted on Cloud AI Platform.

  5. Cloud AI Platform will return the prediction back to the inference module.

  6. The inference module will package the prediction into a DICOM instance. This can potentially be a DICOM structured report, presentation state, or even burned-in text on the image. In this codelab, we will focus on just DICOM structured reports. The structured report is then stored back in the Cloud Healthcare API using STOW-RS.

  7. The client application can query for (or retrieve) the structured report by using QIDO-RS or WADO-RS. Pubsub can also be used by the client application to poll for the newly created DICOM structured report instance.
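
For illustration, here is a simplified sketch of steps 3 through 5 using objects already defined in this notebook. It is not the actual scripts/inference/inference.py: Pubsub pulling, error handling, and the structured report construction (step 6) are omitted, and both the WADO-RS rendered retrieval path and the request JSON structure are assumptions that mirror patterns used elsewhere in this tutorial.


In [ ]:
# Simplified sketch of steps 3-5 above; not the actual inference module code.
# The instance path is assembled from variables defined earlier purely for
# illustration; in the real module it arrives as the Pubsub message payload.
from base64 import b64encode
import io
from PIL import Image

instance_path = '{}/studies/{}/series/{}/instances/{}'.format(
    dicom_store_path.dicomweb_path_str,
    input_mammo_study_uid, input_mammo_series_uid, input_mammo_instance_uid)

# Step 3: retrieve the instance as a JPEG via WADO-RS rendered retrieval
# (the /rendered path and Accept header are assumptions).
jpeg_resp = authed_session.get(
    '{}/{}/rendered'.format(HEALTHCARE_API_URL, instance_path),
    headers={'Accept': 'image/jpeg'})

# Resize to the Inception V3 input size, mirroring the earlier prediction cell.
im = Image.open(io.BytesIO(jpeg_resp.content)).resize((299, 299))
buf = io.BytesIO()
im.save(buf, format='JPEG')

# Steps 4 and 5: call the model hosted on Cloud AI Platform, mirroring the
# JSON structure used with `gcloud ai-platform predict` above.
predict_resp = authed_session.post(
    'https://ml.googleapis.com/v1/{}:predict'.format(full_model_name),
    json={'instances': [{'inputs': [{'b64': b64encode(buf.getvalue()).decode('utf-8')}]}]})
print(predict_resp.text)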

To begin, we will create a new DICOM store that will store our inference source (DICOM mammography instance) and results (DICOM structured report). In order to enable Pubsub notifications to be triggered on inserted instances, we will give the DICOM store a Pubsub channel to publish on.


In [0]:
# Pubsub config.
pubsub_topic_id = "MY_PUBSUB_TOPIC_ID" # @param
pubsub_subscription_id = "MY_PUBSUB_SUBSCRIPTION_ID" # @param

# DICOM store for storing the DICOM instances used for inference.
inference_dicom_store_id = "MY_INFERENCE_DICOM_STORE" # @param

pubsub_subscription_name = "projects/" + project_id + "/subscriptions/" + pubsub_subscription_id
inference_dicom_store_path = dicom_path.FromPath(dicom_store_path, store_id=inference_dicom_store_id)

In [0]:
%%bash -s {pubsub_topic_id} {pubsub_subscription_id} {project_id} {location} {dataset_id} {inference_dicom_store_id}

# Create Pubsub channel.
gcloud beta pubsub topics create $1
gcloud beta pubsub subscriptions create $2 --topic $1

# Create a Cloud Healthcare DICOM store that publishes to the given Pubsub topic.
TOKEN=`gcloud beta auth application-default print-access-token`
NOTIFICATION_CONFIG="{notification_config: {pubsub_topic: \"projects/$3/topics/$1\"}}"
curl -s -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${TOKEN}" -d "${NOTIFICATION_CONFIG}" https://healthcare.googleapis.com/v1/projects/$3/locations/$4/datasets/$5/dicomStores?dicom_store_id=$6

# Enable Cloud Healthcare API to publish on given Pubsub topic.
PROJECT_NUMBER=`gcloud projects describe $3 | grep projectNumber | sed 's/[^0-9]//g'`
SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com"
gcloud beta pubsub topics add-iam-policy-binding $1 --member="serviceAccount:${SERVICE_ACCOUNT}" --role="roles/pubsub.publisher"

Next, we will build the inference module using the Cloud Build API. This will create a Docker container that will be stored in Google Container Registry. The inference module code is found in inference.py. The build script used to build the Docker container for this module is cloudbuild.yaml. Progress of the build can be monitored on the Cloud Build dashboard.


In [0]:
%%bash -s {project_id}
PROJECT_ID=$1

gcloud builds submit --config scripts/inference/cloudbuild.yaml --timeout 1h scripts/inference

Next, we will deploy the inference module to Kubernetes: we create a Kubernetes cluster and a Deployment for the inference module.


In [0]:
%%bash -s {project_id} {location} {pubsub_subscription_name} {full_model_name} {inference_dicom_store_path}
gcloud container clusters create inference-module --region=$2 --scopes https://www.googleapis.com/auth/cloud-platform --num-nodes=1

PROJECT_ID=$1
SUBSCRIPTION_PATH=$3
MODEL_PATH=$4
INFERENCE_DICOM_STORE_PATH=$5

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: inference-module
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: inference-module
    spec:
      containers:
        - name: inference-module
          image: gcr.io/${PROJECT_ID}/inference-module:latest
          command:
            - "/opt/inference_module/bin/inference_module"
            - "--subscription_path=${SUBSCRIPTION_PATH}"
            - "--model_path=${MODEL_PATH}"
            - "--dicom_store_path=${INFERENCE_DICOM_STORE_PATH}"
            - "--prediction_service=CAIP"
EOF

Next, we will store a mammography DICOM instance from the TCIA dataset to the DICOM store. This is the image that we will request inference for. Pushing this instance to the DICOM store will result in a Pubsub message, which will trigger the inference module.


In [0]:
# DICOM Study/Series UID of input mammography image that we'll push for inference.
input_mammo_study_uid = "1.3.6.1.4.1.9590.100.1.2.85935434310203356712688695661986996009"
input_mammo_series_uid = "1.3.6.1.4.1.9590.100.1.2.374115997511889073021386151921807063992"
input_mammo_instance_uid = "1.3.6.1.4.1.9590.100.1.2.289923739312470966435676008311959891294"

In [0]:
from google.cloud import storage
from dicomweb_client.api import DICOMwebClient
from dicomweb_client import session_utils
import pydicom


storage_client = storage.Client()
bucket = storage_client.bucket('gcs-public-data--healthcare-tcia-cbis-ddsm', user_project=project_id)
blob = bucket.blob("dicom/{}/{}/{}.dcm".format(input_mammo_study_uid,input_mammo_series_uid,input_mammo_instance_uid))
blob.download_to_filename('example.dcm')
dataset = pydicom.dcmread('example.dcm')
session = session_utils.create_session_from_gcp_credentials()
study_path = dicom_path.FromPath(inference_dicom_store_path, study_uid=input_mammo_study_uid)
dicomweb_url = os.path.join(HEALTHCARE_API_URL, study_path.dicomweb_path_str)
dcm_client = DICOMwebClient(dicomweb_url, session)
dcm_client.store_instances(datasets=[dataset])

You should be able to observe the inference module's logs by running the following command. In the logs, you should observe that the inference module successfully received the Pubsub message and ran inference on the DICOM instance. The logs should also include the inference results. It can take a few minutes to start up the Kubernetes deployment, so you may have to run this a few times.


In [0]:
!kubectl logs -l app=inference-module

You can also query the Cloud Healthcare DICOMWeb API (using QIDO-RS) to see that the DICOM structured report has been inserted for the study. The structured report contents can be found under tag "0040A730".

You can optionally also use WADO-RS to retrieve the instance (e.g. for viewing).


In [0]:
dcm_client.search_for_instances(study_path.study_uid, fields=['all'])
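
For example, the sketch below finds the structured report instance in the study via QIDO-RS and retrieves it via WADO-RS using dicomweb-client. The SOP Class UID used for filtering (Basic Text SR) is an assumption about how the inference module encodes its result; adjust it if your report uses a different SR SOP class.


In [ ]:
# A minimal sketch: locate the structured report via QIDO-RS, then retrieve it
# via WADO-RS. Filtering on the Basic Text SR SOP Class UID is an assumption
# about the inference module's output format.
_BASIC_TEXT_SR_SOP_CLASS_UID = '1.2.840.10008.5.1.4.1.1.88.11'

for inst in dcm_client.search_for_instances(study_path.study_uid, fields=['all']):
  sop_class_uid = inst.get('00080016', {}).get('Value', [None])[0]
  if sop_class_uid != _BASIC_TEXT_SR_SOP_CLASS_UID:
    continue  # Skip the mammography image instance.
  series_uid = inst['0020000E']['Value'][0]    # SeriesInstanceUID
  instance_uid = inst['00080018']['Value'][0]  # SOPInstanceUID
  sr_dataset = dcm_client.retrieve_instance(
      study_path.study_uid, series_uid, instance_uid)
  print(sr_dataset)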