2-Class (Benign, Malignant) Prediction with SVM

A support vector classifier with an RBF kernel (C=1, gamma=0.001) was built here and evaluated on two sets of image data.

  • For raw DDSM images, the SVM model achieved an overall accuracy of 69.8%.
  • For threshold images, the SVM model achieved an overall accuracy of 69.6%.
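Both accuracies can be recomputed from the confusion matrices reported later in the notebook, since correct predictions sit on the diagonal. A minimal sketch (the matrices are copied from the `Out[33]` and `Out[58]` cells below):

```python
import numpy as np

def accuracy_from_confusion(cm):
    """Overall accuracy = correct predictions (diagonal) / all predictions."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

# Confusion matrices from the raw-DDSM and threshold-image runs
raw_cm = [[73, 69], [28, 151]]
threshold_cm = [[82, 56], [38, 133]]

print(round(accuracy_from_confusion(raw_cm), 3))        # 0.698
print(round(accuracy_from_confusion(threshold_cm), 3))  # 0.696
```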

In [6]:
import datetime
import gc
import numpy as np
import os
import random
from scipy import misc
import string
import time
import sys
import sklearn.metrics as skm
import collections
from sklearn.svm import SVC
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt
from sklearn import metrics
import dwdii_bc_model_helper as bc

random.seed(20275)
np.set_printoptions(precision=2)

Raw DDSM images


In [13]:
imagePath = "png"
trainImagePath = imagePath
trainDataPath = "data/ddsm_train.csv"
testDataPath = "data/ddsm_test.csv"
categories = bc.bcNumerics()
imgResize = (150, 150)
normalVsAbnormal=False

In [14]:
os.listdir('data')


Out[14]:
['ddsm_test.csv', 'ddsm_train.csv', 'ddsm_val.csv', 'mias_all.csv']

In [15]:
metaData, meta2, mCounts = bc.load_training_metadata(trainDataPath, balanceViaRemoval=True, verbose=True, 
                                                     normalVsAbnormal=normalVsAbnormal)


Raw Balance
----------------
benign 531
malignant 739
normal 2685
balanaceViaRemoval.avgE: 1318
balanaceViaRemoval.theshold: 1318.0

After Balancing
----------------
benign 531
malignant 739
normal 862

In [16]:
thesePathos = ['benign','malignant']

In [17]:
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_data, Y_data = bc.load_data(trainDataPath, trainImagePath, 
                              categories=categories,
                              maxData = maxData, 
                              verboseFreq = 50, 
                              imgResize=imgResize, 
                              thesePathos=thesePathos,
                              normalVsAbnormal=normalVsAbnormal)
print X_data.shape
print Y_data.shape


Raw Balance
----------------
benign 531
malignant 739
normal 2685
balanaceViaRemoval.avgE: 1318
balanaceViaRemoval.theshold: 1318.0

After Balancing
----------------
benign 531
malignant 739
normal 862
0.0000: C_0418_1.LEFT_CC.LJPEG.png
0.0235: C_0035_1.RIGHT_MLO.LJPEG.png
0.0469: C_0390_1.LEFT_CC.LJPEG.png
0.0704: B_3434_1.LEFT_CC.LJPEG.png
0.0938: C_0381_1.RIGHT_CC.LJPEG.png
0.1173: C_0350_1.RIGHT_MLO.LJPEG.png
0.1407: B_3501_1.RIGHT_CC.LJPEG.png
0.1642: B_3460_1.LEFT_CC.LJPEG.png
0.1876: C_0331_1.LEFT_MLO.LJPEG.png
0.2111: B_3489_1.RIGHT_MLO.LJPEG.png
0.2345: C_0016_1.RIGHT_CC.LJPEG.png
0.2580: B_3029_1.LEFT_MLO.LJPEG.png
0.2814: C_0037_1.LEFT_MLO.LJPEG.png
0.3049: C_0382_1.RIGHT_MLO.LJPEG.png
0.3283: B_3134_1.RIGHT_MLO.LJPEG.png
0.3518: B_3513_1.RIGHT_MLO.LJPEG.png
0.3752: C_0076_1.RIGHT_CC.LJPEG.png
0.3987: A_1070_1.RIGHT_CC.LJPEG.png
0.4221: C_0245_1.RIGHT_MLO.LJPEG.png
0.4456: A_1022_1.LEFT_MLO.LJPEG.png
0.4690: A_1063_1.LEFT_MLO.LJPEG.png
0.4925: C_0344_1.LEFT_CC.LJPEG.png
0.5159: A_1004_1.LEFT_CC.LJPEG.png
0.5394: B_3367_1.LEFT_MLO.LJPEG.png
0.5629: C_0373_1.RIGHT_CC.LJPEG.png
0.5863: B_3432_1.RIGHT_MLO.LJPEG.png
(1270L, 150L, 150L)
(1270L, 1L)

In [18]:
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_test, Y_test = bc.load_data(testDataPath, imagePath, 
                              categories=categories,
                              maxData = maxData, 
                              verboseFreq = 50, 
                              imgResize=imgResize, 
                              thesePathos=thesePathos,
                              normalVsAbnormal=normalVsAbnormal)
print X_test.shape
print Y_test.shape


Raw Balance
----------------
benign 142
malignant 179
normal 658
balanaceViaRemoval.avgE: 326
balanaceViaRemoval.theshold: 326.0

After Balancing
----------------
benign 142
malignant 179
normal 215
0.0000: B_3380_1.RIGHT_MLO.LJPEG.png
0.0235: C_0241_1.LEFT_MLO.LJPEG.png
0.0469: C_0044_1.LEFT_MLO.LJPEG.png
0.0704: B_3003_1.RIGHT_MLO.LJPEG.png
0.0938: C_0241_1.LEFT_CC.LJPEG.png
0.1173: B_3013_1.RIGHT_MLO.LJPEG.png
0.1407: C_0103_1.LEFT_CC.LJPEG.png
(321L, 150L, 150L)
(321L, 1L)

In [19]:
X_train = X_data
Y_train = Y_data

In [20]:
print X_train.shape
print X_test.shape

print Y_train.shape
print Y_test.shape


(1270L, 150L, 150L)
(321L, 150L, 150L)
(1270L, 1L)
(321L, 1L)

In [21]:
def yDist(y):
    bcCounts = collections.defaultdict(int)
    for a in range(0, y.shape[0]):
        bcCounts[y[a][0]] += 1
    return bcCounts

print "Y_train Dist: " + str(yDist(Y_train))
print "Y_test Dist: " + str(yDist(Y_test))


Y_train Dist: defaultdict(<type 'int'>, {1: 531, 2: 739})
Y_test Dist: defaultdict(<type 'int'>, {1: 142, 2: 179})
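The `yDist` helper above is equivalent to counting unique labels; a minimal sketch of the same tally with `np.unique` (using a dummy label array with the same distribution as `Y_train`):

```python
import numpy as np

def y_dist(y):
    """Count occurrences of each class label in an (n, 1) label array."""
    values, counts = np.unique(y.ravel(), return_counts=True)
    return {int(v): int(c) for v, c in zip(values, counts)}

y = np.array([[1]] * 531 + [[2]] * 739)  # same distribution as Y_train above
print(y_dist(y))  # {1: 531, 2: 739}
```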

In [22]:
X_train_s = X_train.reshape((1270,-1))

In [23]:
X_test_s = X_test.reshape((321,-1))

In [24]:
Y_train_s = Y_train.ravel()
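`SVC` expects a 2-D feature matrix and a 1-D label vector, so the reshapes above flatten each 150x150 image into a single row of 22,500 pixel features and collapse the (n, 1) label array. The same reshaping on dummy arrays of matching shape:

```python
import numpy as np

# Dummy stand-ins with the same shapes as X_train / Y_train above
X = np.zeros((1270, 150, 150))
Y = np.ones((1270, 1), dtype=int)

X_flat = X.reshape((X.shape[0], -1))  # one row of 150*150 = 22500 pixels per image
Y_flat = Y.ravel()                    # (1270, 1) -> (1270,)

print(X_flat.shape)  # (1270, 22500)
print(Y_flat.shape)  # (1270,)
```

Using `X.shape[0]` instead of the hard-coded `1270` keeps the reshape correct if the number of loaded images changes.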

In [30]:
model = SVC(C=1.0, gamma=0.001, kernel='rbf')

In [31]:
model.fit(X_train_s,Y_train_s)


Out[31]:
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma=0.001, kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)

In [32]:
predicted = model.predict(X_test_s)
expected = Y_test

In [33]:
svm_matrix = skm.confusion_matrix(Y_test, predicted)
svm_matrix


Out[33]:
array([[ 73,  69],
       [ 28, 151]])

In [34]:
print metrics.accuracy_score(expected,predicted)


0.697819314642
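Overall accuracy hides the per-class behaviour; the per-class recall (the diagonal of the row-normalized confusion matrix plotted below) can be computed directly from the matrix in `Out[33]`:

```python
import numpy as np

def per_class_recall(cm):
    """Recall for each true class: diagonal / row sums of the confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

raw_cm = np.array([[73, 69], [28, 151]])  # rows: true benign, true malignant
print(np.round(per_class_recall(raw_cm), 2))  # [0.51 0.84]
```

In other words, the raw-image model recovers only about half of the benign cases but 84% of the malignant ones.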

In [42]:
numBC = bc.reverseDict(categories)
class_names = numBC.values()
np.set_printoptions(precision=2)

In [43]:
# Plot non-normalized confusion matrix
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names[1:3],
                      title='Confusion Matrix without normalization')
plt.savefig('raw_class2_bVm_o_norm.png')


Confusion matrix, without normalization
[[ 73  69]
 [ 28 151]]

In [44]:
from IPython.display import Image
Image(filename='raw_class2_bVm_o_norm.png')


Out[44]:

In [45]:
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names[1:3], normalize=True,
                      title='Confusion Matrix with normalization')
plt.savefig('raw_class2_bVm_norm.png')


Normalized confusion matrix
[[ 0.51  0.49]
 [ 0.16  0.84]]

In [46]:
# Load the image we just saved
from IPython.display import Image
Image(filename='raw_class2_bVm_norm.png')


Out[46]:

Threshold Images


In [47]:
imagePath = "DDSM_threshold"
trainImagePath = imagePath
trainDataPath = "data/ddsm_train.csv"
testDataPath = "data/ddsm_test.csv"
categories = bc.bcNumerics()
imgResize = (150, 150)
normalVsAbnormal=False

In [48]:
os.listdir('data')


Out[48]:
['ddsm_test.csv', 'ddsm_train.csv', 'ddsm_val.csv', 'mias_all.csv']

In [49]:
metaData, meta2, mCounts = bc.load_training_metadata(trainDataPath, balanceViaRemoval=True, verbose=True, 
                                                     normalVsAbnormal=normalVsAbnormal)


Raw Balance
----------------
benign 531
malignant 739
normal 2685
balanaceViaRemoval.avgE: 1318
balanaceViaRemoval.theshold: 1318.0

After Balancing
----------------
benign 531
malignant 739
normal 862

In [50]:
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_data, Y_data = bc.load_data(trainDataPath, trainImagePath, 
                              categories=categories,
                              maxData = maxData, 
                              verboseFreq = 50, 
                              imgResize=imgResize, 
                              thesePathos=thesePathos,
                              normalVsAbnormal=normalVsAbnormal)
print X_data.shape
print Y_data.shape


Raw Balance
----------------
benign 531
malignant 739
normal 2685
balanaceViaRemoval.avgE: 1318
balanaceViaRemoval.theshold: 1318.0

After Balancing
----------------
benign 531
malignant 739
normal 862
0.0000: C_0418_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1087_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1010_1.RIGHT_CC.LJPEG.png
0.0235: B_3065_1.RIGHT_MLO.LJPEG.png
0.0469: B_3068_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\0\C_0235_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1016_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1029_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1100_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1060_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1055_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1097_1.LEFT_CC.LJPEG.png
0.0704: B_3031_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3419_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3159_1.LEFT_CC.LJPEG.png
0.0938: A_1061_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\0\C_0247_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1029_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3169_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3484_1.RIGHT_MLO.LJPEG.png
0.1173: A_1093_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1079_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1043_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1020_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1080_1.RIGHT_MLO.LJPEG.png
0.1407: B_3481_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1004_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1017_1.LEFT_MLO.LJPEG.png
0.1642: B_3126_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1093_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1000_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3098_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1020_1.LEFT_MLO.LJPEG.png
0.1876: B_3158_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3489_1.RIGHT_MLO.LJPEG.png
0.2111: C_0342_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\0\C_0280_1.LEFT_CC.LJPEG.png
0.2345: C_0367_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1083_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1080_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3144_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1090_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3419_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\0\C_0285_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1053_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1030_1.LEFT_MLO.LJPEG.png
0.2580: C_0010_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3482_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1060_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1096_1.LEFT_MLO.LJPEG.png
0.2814: C_0034_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1075_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1087_1.RIGHT_MLO.LJPEG.png
0.3049: B_3398_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3175_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1100_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1008_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3050_1.RIGHT_MLO.LJPEG.png
0.3283: B_3012_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1035_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1079_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1011_1.LEFT_MLO.LJPEG.png
0.3518: C_0142_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1031_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3435_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1006_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1033_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1090_1.LEFT_CC.LJPEG.png
0.3752: C_0270_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3175_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1063_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1031_1.LEFT_MLO.LJPEG.png
0.3987: B_3108_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1044_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1022_1.LEFT_MLO.LJPEG.png
0.4221: C_0408_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3483_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1097_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1063_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1053_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\0\C_0246_1.LEFT_MLO.LJPEG.png
0.4456: C_0009_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1055_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\0\C_0246_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1071_1.LEFT_CC.LJPEG.png
0.4690: C_0363_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1044_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3433_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1004_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3098_1.RIGHT_CC.LJPEG.png
0.4925: C_0179_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1054_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1075_1.RIGHT_CC.LJPEG.png
0.5159: B_3391_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3490_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1016_1.LEFT_CC.LJPEG.png
0.5394: B_3457_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1017_1.LEFT_CC.LJPEG.png
(1196L, 150L, 150L)
(1196L, 1L)

In [51]:
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_test, Y_test = bc.load_data(testDataPath, imagePath, 
                              categories=categories,
                              maxData = maxData, 
                              verboseFreq = 50, 
                              imgResize=imgResize, 
                              thesePathos=thesePathos,
                              normalVsAbnormal=normalVsAbnormal)
print X_test.shape
print Y_test.shape


Raw Balance
----------------
benign 142
malignant 179
normal 658
balanaceViaRemoval.avgE: 326
balanaceViaRemoval.theshold: 326.0

After Balancing
----------------
benign 142
malignant 179
normal 215
0.0000: B_3380_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1019_1.LEFT_CC.LJPEG.png
0.0235: C_0132_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1004_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3443_1.RIGHT_CC.LJPEG.png
0.0469: A_1058_1.RIGHT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1014_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\3\B_3097_1.RIGHT_MLO.LJPEG.png
0.0704: C_0375_1.LEFT_MLO.LJPEG.png
0.0938: C_0473_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3443_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1005_1.RIGHT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1103_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1014_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\1\A_1004_1.RIGHT_MLO.LJPEG.png
0.1173: C_0323_1.LEFT_MLO.LJPEG.png
Not Found: DDSM_threshold\3\B_3097_1.RIGHT_CC.LJPEG.png
0.1407: B_3039_1.LEFT_CC.LJPEG.png
Not Found: DDSM_threshold\1\A_1037_1.RIGHT_CC.LJPEG.png
(309L, 150L, 150L)
(309L, 1L)

In [52]:
X_train = X_data
Y_train = Y_data

In [53]:
print X_train.shape
print X_test.shape

print Y_train.shape
print Y_test.shape


(1196L, 150L, 150L)
(309L, 150L, 150L)
(1196L, 1L)
(309L, 1L)

In [54]:
def yDist(y):
    bcCounts = collections.defaultdict(int)
    for a in range(0, y.shape[0]):
        bcCounts[y[a][0]] += 1
    return bcCounts

print "Y_train Dist: " + str(yDist(Y_train))
print "Y_test Dist: " + str(yDist(Y_test))


Y_train Dist: defaultdict(<type 'int'>, {1: 509, 2: 687})
Y_test Dist: defaultdict(<type 'int'>, {1: 138, 2: 171})

In [55]:
X_train_s = X_train.reshape((1196,-1))
X_test_s = X_test.reshape((309,-1))
Y_train_s = Y_train.ravel()

In [56]:
model = SVC(C=1.0, gamma=0.001, kernel='rbf')
model.fit(X_train_s,Y_train_s)


Out[56]:
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma=0.001, kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)

In [57]:
predicted = model.predict(X_test_s)
expected = Y_test

In [58]:
svm_matrix = skm.confusion_matrix(Y_test, predicted)
svm_matrix


Out[58]:
array([[ 82,  56],
       [ 38, 133]])

In [59]:
print metrics.accuracy_score(expected,predicted)


0.695792880259

In [60]:
numBC = bc.reverseDict(categories)
class_names = numBC.values()
np.set_printoptions(precision=2)

# Plot non-normalized confusion matrix
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names[1:3],
                      title='Confusion Matrix without normalization')
plt.savefig('threshold_class2_bVm_o_norm.png')


Confusion matrix, without normalization
[[ 82  56]
 [ 38 133]]

In [61]:
from IPython.display import Image
Image(filename='threshold_class2_bVm_o_norm.png')


Out[61]:

In [62]:
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names[1:3], normalize=True,
                      title='Confusion Matrix with normalization')
plt.savefig('threshold_class2_bVm_norm.png')


Normalized confusion matrix
[[ 0.59  0.41]
 [ 0.22  0.78]]
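Comparing the two runs: thresholding improves benign recall (0.59 vs 0.51) at the cost of malignant recall (0.78 vs 0.84), while overall accuracy stays roughly flat. A small sketch of this side-by-side comparison from the two confusion matrices above:

```python
import numpy as np

def row_normalize(cm):
    """Normalize each row so entries become per-true-class rates."""
    cm = np.asarray(cm, dtype=float)
    return cm / cm.sum(axis=1, keepdims=True)

raw_cm = np.array([[73, 69], [28, 151]])  # raw DDSM images
thr_cm = np.array([[82, 56], [38, 133]])  # threshold images

for name, cm in [('raw', raw_cm), ('threshold', thr_cm)]:
    recall = np.diag(row_normalize(cm))          # per-class recall
    acc = np.trace(cm) / float(cm.sum())         # overall accuracy
    print(name, np.round(recall, 2), round(acc, 3))
```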

In [63]:
# Load the image we just saved
from IPython.display import Image
Image(filename='threshold_class2_bVm_norm.png')


Out[63]:

In [ ]: