Predict heart failure with Watson Machine Learning

This notebook contains steps and code to create a predictive model for heart failure and then deploy that model to Watson Machine Learning so it can be used in an application.

Learning Goals

The learning goals of this notebook are:

  • Load a CSV file into the Object Storage Service linked to your Data Science Experience
  • Create an Apache® Spark machine learning model
  • Train and evaluate a model
  • Persist a model in a Watson Machine Learning repository

1. Setup

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Create a Watson Machine Learning Service instance (a free plan is offered) and associate it with your project
  • Upload heart failure data to the Object Store service that is part of your Data Science Experience trial

2. Load and explore data

In this section you will load the data as an Apache® Spark DataFrame and perform a basic exploration.

Load the data to the Spark DataFrame from your associated Object Storage instance.


In [ ]:
# IMPORTANT Follow the lab instructions to insert authentication and access info here
# to get access to the data used in this notebook. Assuming the inserted cell
# provides the CSV file path as path_to_csv, a typical load looks like:
df_data = spark.read\
    .format('csv')\
    .option('header', 'true')\
    .option('inferSchema', 'True')\
    .load(path_to_csv)  # path_to_csv comes from the inserted credentials cell

Explore the loaded data by using the following Apache® Spark DataFrame methods:

  • print schema
  • print top ten records
  • count all records

In [ ]:
df_data.printSchema()

As you can see, the data contains ten fields. The HEARTFAILURE field is the one we would like to predict (the label).
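
A quick optional check, not part of the original lab: look at how the HEARTFAILURE label is distributed, since a roughly balanced label makes the accuracy metric used later more meaningful.


In [ ]:
df_data.groupBy("HEARTFAILURE").count().show()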


In [ ]:
df_data.show(10)

In [ ]:
df_data.describe().show()

In [ ]:
df_data.count()

As you can see, the data set contains 10800 records.

3. Interactive Visualizations with PixieDust


In [ ]:
# To confirm you have the latest version of PixieDust on your system, run this cell
!pip install --user --upgrade pixiedust

If indicated by the installer, restart the kernel, rerun the notebook up to this cell, and then continue with the workshop.


In [ ]:
import pixiedust

Simple visualization using bar charts

With PixieDust display(), you can visually explore the loaded data using built-in charts such as bar charts, line charts, scatter plots, or maps. To explore the data set, choose a chart type from the drop-down menu, then configure the chart and display options.


In [ ]:
display(df_data)

4. Create an Apache® Spark machine learning model

In this section you will learn how to prepare the data, and then create and train an Apache® Spark machine learning model.

4.1: Prepare data

In this subsection you will split your data into train and test data sets.


In [ ]:
# Split the data 80/20, using 24 as the random seed for reproducibility
split_data = df_data.randomSplit([0.8, 0.2], 24)
train_data = split_data[0]
test_data = split_data[1]


print("Number of training records: " + str(train_data.count()))
print("Number of testing records : " + str(test_data.count()))

As you can see, the data has been successfully split into two data sets:

  • The train data set, which is the larger group, is used for training.
  • The test data set is held out and used to evaluate the model.

4.2: Create pipeline and train a model

In this section you will create an Apache® Spark machine learning pipeline and then train the model. In the first step you need to import the Apache® Spark machine learning packages that will be needed in the subsequent steps.

A sequence of data processing steps is called a data pipeline. Each step in the pipeline processes the data and passes the result to the next step; this allows you to transform the raw input data and fit your model in a single operation.


In [ ]:
from pyspark.ml.feature import StringIndexer, IndexToString, VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml import Pipeline, Model

In the following step, convert all the string fields to numeric ones by using the StringIndexer transformer.


In [ ]:
stringIndexer_label = StringIndexer(inputCol="HEARTFAILURE", outputCol="label").fit(df_data)
stringIndexer_sex = StringIndexer(inputCol="SEX", outputCol="SEX_IX")
stringIndexer_famhist = StringIndexer(inputCol="FAMILYHISTORY", outputCol="FAMILYHISTORY_IX")
stringIndexer_smoker = StringIndexer(inputCol="SMOKERLAST5YRS", outputCol="SMOKERLAST5YRS_IX")
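
An optional check, not part of the original lab: the fitted StringIndexer for the label exposes a labels list (ordered by frequency, most frequent first), which shows how the string values map to numeric indices.


In [ ]:
print(stringIndexer_label.labels)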

In the following step, create a feature vector by combining all features together.


In [ ]:
vectorAssembler_features = VectorAssembler(
    inputCols=["AVGHEARTBEATSPERMIN", "PALPITATIONSPERDAY", "CHOLESTEROL", "BMI",
               "AGE", "SEX_IX", "FAMILYHISTORY_IX", "SMOKERLAST5YRS_IX",
               "EXERCISEMINPERWEEK"],
    outputCol="features")

Next, define the estimator you want to use for classification. A Random Forest classifier is used in the following example.


In [ ]:
rf = RandomForestClassifier(labelCol="label", featuresCol="features")

Finally, convert the indexed prediction labels back to the original string labels by using the IndexToString transformer.


In [ ]:
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=stringIndexer_label.labels)

To preview the output of the feature-transformation stages, run them (without the classifier) on the full data set:


In [ ]:
transform_df_pipeline = Pipeline(stages=[stringIndexer_label, stringIndexer_sex, stringIndexer_famhist, stringIndexer_smoker, vectorAssembler_features])
transformed_df = transform_df_pipeline.fit(df_data).transform(df_data)
transformed_df.show()

Let's build the pipeline now. A pipeline consists of transformers and an estimator.


In [ ]:
pipeline_rf = Pipeline(stages=[stringIndexer_label, stringIndexer_sex, stringIndexer_famhist, stringIndexer_smoker, vectorAssembler_features, rf, labelConverter])

Now, you can train your Random Forest model by using the previously defined pipeline and training data.


In [ ]:
model_rf = pipeline_rf.fit(train_data)

You can check your model accuracy now. To evaluate the model, use the test data.


In [ ]:
predictions = model_rf.transform(test_data)
evaluatorRF = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy")
accuracy = evaluatorRF.evaluate(predictions)
print("Accuracy = %g" % accuracy)
print("Test Error = %g" % (1.0 - accuracy))

You can tune your model now to achieve better accuracy. For simplicity, tuning is omitted from this example; a minimal sketch of what it might look like follows.
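
The sketch below is an illustrative assumption, not part of the original lab: it wraps the existing pipeline_rf in Spark's CrossValidator and searches a small grid of Random Forest hyperparameter values (the grid values are arbitrary examples, not tuned settings).


In [ ]:
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator

# Candidate hyperparameter values for the Random Forest stage (illustrative only)
paramGrid = ParamGridBuilder()\
    .addGrid(rf.numTrees, [10, 20, 50])\
    .addGrid(rf.maxDepth, [4, 6, 8])\
    .build()

# 3-fold cross-validation over the full pipeline, scored with the same evaluator
crossval = CrossValidator(estimator=pipeline_rf,
                          estimatorParamMaps=paramGrid,
                          evaluator=evaluatorRF,
                          numFolds=3)

# The best model found on the training data; evaluate it on test_data as before
model_rf_tuned = crossval.fit(train_data).bestModel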

5. Persist model

In this section you will learn how to store your pipeline and model in the Watson Machine Learning repository by using the Python client libraries. First, you must import the client libraries.


In [ ]:
from repository.mlrepositoryclient import MLRepositoryClient
from repository.mlrepositoryartifact import MLRepositoryArtifact

Authenticate to the Watson Machine Learning service on Bluemix.

STOP here!

Put the authentication information (username and password) from your instance of the Watson Machine Learning service here.


In [ ]:
service_path = 'https://ibm-watson-ml.mybluemix.net'
username = 'xxxxxxxxxxxxxxx'
password = 'xxxxxxxxxxxxxxx'

Tip: The service_path, username, and password values can be found on the Service Credentials tab of the Watson Machine Learning service instance created in Bluemix.


In [ ]:
ml_repository_client = MLRepositoryClient(service_path)
ml_repository_client.authorize(username, password)

Create the model artifact (an abstraction layer).


In [ ]:
pipeline_artifact = MLRepositoryArtifact(pipeline_rf, name="pipeline")
model_artifact = MLRepositoryArtifact(model_rf, training_data=train_data, name="Heart Failure Prediction Model", pipeline_artifact=pipeline_artifact)

Tip: The MLRepositoryArtifact method expects a trained model object, training data, and a model name. (It is this model name that is displayed by the Watson Machine Learning service.)

5.1: Save pipeline and model

In this subsection you will learn how to save pipeline and model artifacts to your Watson Machine Learning instance.


In [ ]:
saved_model = ml_repository_client.models.save(model_artifact)

Get the saved model metadata from Watson Machine Learning. Tip: Use meta.available_props() to get the list of available props.


In [ ]:
saved_model.meta.available_props()

In [ ]:
print "modelType: " + saved_model.meta.prop("modelType")
print "trainingDataSchema: " + str(saved_model.meta.prop("trainingDataSchema"))
print "creationTime: " + str(saved_model.meta.prop("creationTime"))
print "modelVersionHref: " + saved_model.meta.prop("modelVersionHref")
print "label: " + saved_model.meta.prop("label")

5.2 Load model to verify that it was saved correctly

You can load your model to make sure that it was saved correctly.


In [ ]:
loadedModelArtifact = ml_repository_client.models.get(saved_model.uid)

Print the model name to make sure that the model artifact has been loaded correctly.


In [ ]:
print(str(loadedModelArtifact.name))

Congratulations! You've successfully created a predictive model and saved it in the Watson Machine Learning service. You can now switch to the Watson Machine Learning console to deploy the model and then test it in an application.