In [31]:
import os
import sys
import numpy as np
import keras.callbacks as cb
import keras.utils.np_utils as np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense, GaussianNoise
from keras.layers.core import Activation
from keras.constraints import maxnorm
from keras import applications # For easy loading the VGG_16 Model
from skimage import color
import sklearn.metrics as skm
import cv2
# Image loading and other helper functions
import dwdii_bc_model_helper as bc
from matplotlib import pyplot as plt

Image_Augmentation

The following function takes the 8bit grayscale images that we are using and performs a series of affine transformations to the images. There are vertical and horizontal flips along with rotations of 90, 270, 15, 30, and 45 degrees. Also included is a function for generating the rotations. Image augmentation needs to be performed before runnin the VGG_Prep function.

When calling the Image_Augmentation function setting the various flags to True will cause the transformation to be performed.


In [3]:
# Function for rotating the image files.
def Image_Rotate(img, angle):
    """
    Rotates a given image the requested angle. Returns the rotated image.
    """
    rows,cols = img.shape
    M = cv2.getRotationMatrix2D((cols/2,rows/2), angle, 1)
    return(cv2.warpAffine(img,M,(cols,rows)))

# Function for augmenting the images
def Image_Augment(X, Y, vflip=False, hflip=False, major_rotate=False, minor_rotate=False):
    """
    :param  X np.array of images
            Y np.array of labels
            vflip, hflip, major_rotate, minor_rotate set to True to perform the augmentations
    :return The set of augmented iages and their corresponding labels
    
    """
    if len(X) != len(Y):
        print('Data and Label arrays not of the same length.')
    
    n = vflip + hflip + 2*major_rotate + 6*minor_rotate
    augmented = np.zeros([len(X) + n*len(X), X.shape[1], X.shape[2]])
    label = np.zeros([len(Y) + n*len(Y), 1])
    count = 0
    for i in range(0, len(X)):
        augmented[count] = X[i]
        label[count] = Y[i]
        count += 1
        if vflip:
            aug = cv2.flip(X[i], 0)
            augmented[count] = aug
            label[count] = Y[i]
            count += 1
        if hflip:
            aug = cv2.flip(X[i], 1)
            augmented[count] = aug
            label[count] = Y[i]
            count +=1 
        if major_rotate:
            angles = [90, 270]
            for angle in angles:
                aug = Image_Rotate(X[i], angle)
                augmented[count] = aug
                label[count] = Y[i]
                count += 1
        if minor_rotate:
            angles = [-45,-30,-15,15,30,45]
            for angle in angles:
                aug = Image_Rotate(X[i], angle)
                augmented[count] = aug
                label[count] = Y[i]
                count += 1
                
    return(augmented, label)

VGG_Prep

The following function takes the 8bit grayscale images that we are using and converts them to 8bit rgb while at the same time changing the pixles to a scale of 0 to 255. These image parameters are required by the VGG_16 model.


In [4]:
def VGG_Prep(img_data):
    """
    :param img_data: training or test images of shape [#images, height, width]
    :return: the array transformed to the correct shape for the VGG network
                shape = [#images, height, width, 3] transforms to rgb and reshapes
    """
    images = np.zeros([len(img_data), img_data.shape[1], img_data.shape[2], 3])
    for i in range(0, len(img_data)):
        im = 255 - (img_data[i] * 255)  # Orginal imagnet images were not rescaled
        im = color.gray2rgb(im)
        images[i] = im
    return(images)

VGG_16 Bottleneck

The following function leverages Daniel's image loader function and performs the following:

  1. Loads in the images using the train, test, and validation csv files.
  2. Prepares the images using the VGG_Prep function
  3. Loads the VGG_16 model with the cassification layers removed.
  4. Runs each of the images for the training, test, and validation sets (if included) through the model.
  5. Saves out .npy files containing the bottleneck features from the VGG_16 model predictions and the corresponding labels.

In [5]:
def vgg16_bottleneck(trainPath, testPath, imagePath, modelPath, size, balance = True, verbose = True, 
                     verboseFreq = 50, valPath = 'None', transform = False, binary = False):
    
    categories = bc.bcNormVsAbnormNumerics()
    
    # Loading data
    metaTr, metaTr2, mCountsTr = bc.load_training_metadata(trainPath, balance, verbose)
    lenTrain = len(metaTr)
    X_train, Y_train = bc.load_data(trainPath, imagePath, maxData = lenTrain,
                                    categories=categories,
                                    verboseFreq = verboseFreq, 
                                    imgResize=size, 
                                    normalVsAbnormal=binary)
    
    metaTest, meataT2, mCountsT = bc.load_training_metadata(testPath, balance, verbose)
    lenTest = len(metaTest)
    X_test, Y_test = bc.load_data(testPath, imagePath, maxData = lenTrain, 
                                  categories=categories,
                                  verboseFreq = verboseFreq, 
                                  imgResize=size, 
                                  normalVsAbnormal=binary)
    
    if transform:
        print('Transforming the Training Data')
        X_train, Y_train = Image_Augment(X=X_train, Y=Y_train, hflip=True, vflip=True, minor_rotate=False, major_rotate=False)
    
    print('Preparing the Training Data for the VGG_16 Model.')
    X_train = VGG_Prep(X_train)
    print('Preparing the Test Data for the VGG_16 Model')
    X_test = VGG_Prep(X_test)
        
    print('Loading the VGG_16 Model')
    model = applications.VGG16(include_top=False, weights='imagenet')
        
    # Generating the bottleneck features for the training data
    print('Evaluating the VGG_16 Model on the Training Data')
    bottleneck_features_train = model.predict(X_train)
    
    # Saving the bottleneck features for the training data
    featuresTrain = os.path.join(modelPath, 'bottleneck_features_train.npy')
    labelsTrain = os.path.join(modelPath, 'labels_train.npy')
    print('Saving the Training Data Bottleneck Features.')
    np.save(open(featuresTrain, 'wb'), bottleneck_features_train)
    np.save(open(labelsTrain, 'wb'), Y_train)

    # Generating the bottleneck features for the test data
    print('Evaluating the VGG_16 Model on the Test Data')
    bottleneck_features_test = model.predict(X_test)
    
    # Saving the bottleneck features for the test data
    featuresTest = os.path.join(modelPath, 'bottleneck_features_test.npy')
    labelsTest = os.path.join(modelPath, 'labels_test.npy')
    print('Saving the Test Data Bottleneck Feaures.')
    np.save(open(featuresTest, 'wb'), bottleneck_features_test)
    np.save(open(labelsTest, 'wb'), Y_test)
    
    if valPath != 'None':
        metaVal, metaV2, mCountsV = bc.load_training_metadata(valPath, verbose = verbose, balanceViaRemoval = False)
        lenVal = len(metaVal)
        X_val, Y_val = bc.load_data(valPath, imagePath, maxData = lenVal, verboseFreq = verboseFreq, imgResize=size)
        X_val = VGG_Prep(X_val)
        
        # Generating the bottleneck features for the test data
        print('Evaluating the VGG_16 Model on the Validataion Data')
        bottleneck_features_val = model.predict(X_val)
    
        # Saving the bottleneck features for the test data
        featuresVal = os.path.join(modelPath, 'bottleneck_features_validation.npy')
        labelsVal = os.path.join(modelPath, 'labels_validation.npy')
        print('Saving the Validation Data Bottleneck Features.')
        np.save(open(featuresVal, 'wb'), bottleneck_features_val)
        np.save(open(labelsVal, 'wb'), Y_val)

Running the model on the Train, Test, and Validation Data

1) The first test is on the rescaled and squared off images maintaining aspect ratio without the artifacts removed.


In [31]:
# global variables for loading the data
imagePath = '../images/threshold/DDSM/'
trainDataPath = '../images/ddsm/ddsm_train.csv'
testDataPath = '../images/ddsm/ddsm_test.csv'
valDataPath = '../images/ddsm/ddsm_val.csv'
imgResize = (224, 224) # can go up to (224, 224)
modelPath = '../model/'

In [32]:
vgg16_bottleneck(trainDataPath, testDataPath, imagePath, modelPath, imgResize, 
                 balance = True, verbose = True, verboseFreq = 50, valPath = valDataPath, 
                 transform = False, binary = True)


Raw Balance
----------------
benign 531
malignant 739
normal 2685
balanaceViaRemoval.avgE: 1318
balanaceViaRemoval.theshold: 1318.0

After Balancing
----------------
benign 531
malignant 739
normal 862
Raw Balance
----------------
abnormal 1270
normal 2685
balanaceViaRemoval.avgE: 1977
balanaceViaRemoval.theshold: 1977.0

After Balancing
----------------
abnormal 1270
normal 1623
0.0000: A_0152_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1033_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0619_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1087_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0022_1.RIGHT_MLO.LJPEG.png
0.0235: A_1077_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0491_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0124_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1010_1.RIGHT_CC.LJPEG.png
0.0469: C_0130_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1065_1.RIGHT_MLO.LJPEG.png
0.0704: B_3435_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1020_1.RIGHT_CC.LJPEG.png
0.0938: A_0534_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0611_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0598_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\C_0235_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1016_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1029_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3426_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1029_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1100_1.LEFT_CC.LJPEG.png
0.1173: A_0297_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1060_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1055_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1097_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0568_1.RIGHT_CC.LJPEG.png
0.1407: A_1101_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0493_1.LEFT_MLO.LJPEG.png
0.1642: B_3646_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\C_0270_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1011_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0617_1.LEFT_MLO.LJPEG.png
0.1876: A_1071_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3419_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3159_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0341_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0490_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0500_1.RIGHT_MLO.LJPEG.png
0.2111: A_0587_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0527_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0319_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0494_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\C_0247_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1029_1.RIGHT_CC.LJPEG.png
0.2345: B_3657_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3169_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1008_1.LEFT_CC.LJPEG.png
0.2580: C_0196_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3484_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0417_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1060_1.RIGHT_CC.LJPEG.png
0.2814: B_3120_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1079_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1043_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0448_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1020_1.LEFT_CC.LJPEG.png
0.3049: C_0321_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1080_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0492_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0483_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0605_1.RIGHT_MLO.LJPEG.png
0.3283: C_0257_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0482_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1004_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0576_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0606_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3169_1.LEFT_MLO.LJPEG.png
0.3518: C_0394_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1017_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0615_1.RIGHT_MLO.LJPEG.png
0.3752: B_3087_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1093_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0610_1.RIGHT_MLO.LJPEG.png
0.3987: B_3105_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1000_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3098_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1020_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0508_1.LEFT_CC.LJPEG.png
0.4221: B_3085_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0589_1.LEFT_MLO.LJPEG.png
0.4456: A_0124_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3489_1.RIGHT_MLO.LJPEG.png
0.4690: B_3135_1.RIGHT_CC.LJPEG.png
0.4925: B_3679_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\C_0280_1.LEFT_CC.LJPEG.png
0.5159: A_1082_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1010_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1083_1.LEFT_CC.LJPEG.png
0.5394: B_3377_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0589_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1080_1.RIGHT_CC.LJPEG.png
0.5629: A_0323_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3144_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1090_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3419_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\C_0285_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1053_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1030_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3482_1.RIGHT_MLO.LJPEG.png
0.5863: A_0519_1.LEFT_CC.LJPEG.png
0.6098: C_0473_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1014_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1060_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1096_1.LEFT_MLO.LJPEG.png
0.6332: A_0334_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1075_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0341_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0417_1.LEFT_MLO.LJPEG.png
0.6567: C_0411_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0542_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0491_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1029_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1087_1.RIGHT_MLO.LJPEG.png
0.6801: B_3410_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3175_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0484_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1100_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0326_1.RIGHT_CC.LJPEG.png
0.7036: B_3600_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0574_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1008_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1017_1.RIGHT_MLO.LJPEG.png
0.7270: C_0405_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3050_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1008_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1100_1.RIGHT_MLO.LJPEG.png
0.7505: A_1088_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1035_1.RIGHT_CC.LJPEG.png
0.7739: B_3047_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1079_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1011_1.LEFT_MLO.LJPEG.png
0.7974: A_0115_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1075_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1031_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\C_0251_1.LEFT_MLO.LJPEG.png
0.8208: B_3046_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3435_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1017_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3099_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1006_1.LEFT_MLO.LJPEG.png
0.8443: B_3482_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1033_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1006_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1090_1.LEFT_CC.LJPEG.png
0.8677: A_1021_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3175_1.RIGHT_CC.LJPEG.png
0.8912: B_3058_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1063_1.LEFT_CC.LJPEG.png
0.9146: C_0214_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0479_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1031_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\C_0245_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1044_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0490_1.RIGHT_MLO.LJPEG.png
0.9381: A_0581_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1014_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1022_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1065_1.LEFT_CC.LJPEG.png
0.9615: C_0200_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3483_1.RIGHT_MLO.LJPEG.png
0.9850: C_0025_1.RIGHT_CC.LJPEG.png
Raw Balance
----------------
benign 142
malignant 179
normal 658
balanaceViaRemoval.avgE: 326
balanaceViaRemoval.theshold: 326.0

After Balancing
----------------
benign 142
malignant 179
normal 215
Raw Balance
----------------
abnormal 321
normal 658
balanaceViaRemoval.avgE: 489
balanaceViaRemoval.theshold: 489.0

After Balancing
----------------
abnormal 321
normal 405
0.0000: A_1105_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1043_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1053_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3495_1.RIGHT_MLO.LJPEG.png
0.0235: A_1092_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1019_1.LEFT_CC.LJPEG.png
0.0469: A_0042_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0618_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1004_1.RIGHT_CC.LJPEG.png
0.0704: A_1043_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1044_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3443_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\C_0284_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0601_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0299_1.LEFT_CC.LJPEG.png
0.0938: C_0458_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0387_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0303_1.RIGHT_CC.LJPEG.png
0.1173: A_0280_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1014_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3097_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0491_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3424_1.RIGHT_CC.LJPEG.png
0.1407: B_3609_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3495_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0704_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0610_1.LEFT_MLO.LJPEG.png
0.1642: B_3053_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1074_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1040_1.LEFT_CC.LJPEG.png
0.1876: C_0212_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0562_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1000_1.RIGHT_MLO.LJPEG.png
0.2111: A_0513_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3443_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1040_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1005_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1103_1.LEFT_CC.LJPEG.png
0.2345: A_0711_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1014_1.LEFT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1031_1.RIGHT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0527_1.LEFT_CC.LJPEG.png
0.2580: B_3013_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1004_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3097_1.RIGHT_CC.LJPEG.png
0.2814: C_0145_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\C_0278_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3094_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0491_1.LEFT_CC.LJPEG.png
0.3049: B_3152_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1053_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/1\A_1037_1.RIGHT_CC.LJPEG.png
Preparing the Training Data for the VGG_16 Model.
Preparing the Test Data for the VGG_16 Model
Loading the VGG_16 Model
Evaluating the VGG_16 Model on the Training Data
Saving the Training Data Bottleneck Features.
Evaluating the VGG_16 Model on the Test Data
Saving the Test Data Bottleneck Feaures.
Raw Balance
----------------
benign 18
malignant 34
normal 142
Raw Balance
----------------
benign 18
malignant 34
normal 142
balanaceViaRemoval.avgE: 64
balanaceViaRemoval.theshold: 64.0

After Balancing
----------------
benign 18
malignant 34
normal 38
0.0000: C_0062_1.LEFT_CC.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0601_1.LEFT_CC.LJPEG.png
0.2577: C_0049_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/3\B_3407_1.RIGHT_MLO.LJPEG.png
Not Found: ../images/threshold/DDSM/0\A_0268_1.LEFT_MLO.LJPEG.png
Evaluating the VGG_16 Model on the Validataion Data
Saving the Validation Data Bottleneck Features.

In [6]:
class LossHistory(cb.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        batch_loss = logs.get('loss')
        self.losses.append(batch_loss)

Train Top Model

This function takes the bottleneck features from the bottleneck function and applies a shallow CNN to these features to classify the images. The function needs to be pointed at the locations of the training and test features along with the training and test labels. You can use the epoch and batch size variables to control the number of images to show to the model and the number of training epochs. The model save variabler alows for saving of the final model weights.


In [36]:
def train_top_model(train_feats, train_lab, test_feats, test_lab, model_path, model_save, epoch = 50, batch = 64):
    train_bottleneck = os.path.join(model_path, train_feats)
    train_labels = os.path.join(model_path, train_lab)
    test_bottleneck = os.path.join(model_path, test_feats)
    test_labels = os.path.join(model_path, test_lab)
    
    history = LossHistory()
    
    X_train = np.load(train_bottleneck)
    Y_train = np.load(train_labels)
    #Y_train = np_utils.to_categorical(Y_train, nb_classes=3)
    Y_train = np_utils.to_categorical(Y_train, nb_classes=2)
    
    X_test = np.load(test_bottleneck)
    Y_test = np.load(test_labels)
    #Y_test = np_utils.to_categorical(Y_test, nb_classes=3)
    Y_test = np_utils.to_categorical(Y_test, nb_classes=2)
    print(X_train.shape)
    
    noise = 0.01
    
    model = Sequential()
    model.add( GaussianNoise(noise, input_shape=X_train.shape[1:]))
    model.add(Flatten(input_shape=X_train.shape[1:]))
    model.add(Dropout(0.7))
    model.add( Dense(256, activation = 'relu') )
    model.add(Dropout(0.5))
    #model.add(Dense(3))
    model.add(Dense(2))
    model.add(Activation('softmax'))
    #loss = 'categorical_crossentropy'
    model.compile(optimizer='adadelta',
                  loss='categorical_crossentropy', 
                  metrics=['accuracy'])

    model.fit(X_train, Y_train,
              nb_epoch=epoch,
              batch_size=batch,
              callbacks=[history],
              validation_data=(X_test, Y_test),
              verbose=2)
    
    score = model.evaluate(X_test, Y_test, batch_size=16, verbose=0)

    print "Network's test score [loss, accuracy]: {0}".format(score)
    
    model.save_weights(os.path.join(model_path, model_save))

Confusion Matrix

The function below takes a data set that has been run through the VGG16 model, the corresponding labels, and a pre-trained weights file and creates a confusion matrix using Daniel's helper function.


In [22]:
def cf_Matrix(data, label, weights, path, save):
    data = os.path.join(path, data)
    label = os.path.join(path, label)
    categories = bc.bcNormVsAbnormNumerics()
    
    X = np.load(data)
    Y = np.load(label)
    #Y = np_utils.to_categorical(Y, nb_classes=3)
    
    # Loading and preping the model
    model = Sequential()
    model.add(Flatten(input_shape=X.shape[1:]))
    model.add(Dropout(0.7))
    
    model.add(Dense(256))
    model.add(Activation('relu'), constraint= maxnorm(3.))
    model.add(Dropout(0.5))
    
    #model.add(Dense(3))
    model.add(Dense(2))
    model.add(Activation('softmax'))
    
    model.load_weights(os.path.join('../model/', weights))
    
    # try Adadelta and Adam
    model.compile(optimizer='adadelta',
                  loss='categorical_crossentropy', 
                  metrics=['accuracy'])
    
    predictOutput = model.predict(X, batch_size=64, verbose=2)
    #numBC = bc.numericBC()
    numBC = bc.reverseDict(categories)
    
    predClasses = []
    for i in range(len(predictOutput)):
        arPred = np.array(predictOutput[i])
        predictionProb = arPred.max()
        predictionNdx = arPred.argmax()
        predClassName = numBC[predictionNdx]
        predClasses.append(predictionNdx)
        
    # Use sklearn's helper method to generate the confusion matrix
    cnf_matrix = skm.confusion_matrix(Y, predClasses)
    
    # Ploting the confusion matrix
    class_names = numBC.values()
    np.set_printoptions(precision=2)
    
    fileCfMatrix = '../figures/confusion_matrix-' + save + '.png'
    plt.figure()
    bc.plot_confusion_matrix(cnf_matrix, classes=class_names,
                             title='Confusion matrix, \n' + save)
    plt.savefig(fileCfMatrix)
    plt.show()

Running the Top Model

The following runs the top model classifier on the bottleneck features.


In [6]:
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train.npy'
train_labels = 'labels_train.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights02.h5'

In [10]:
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
                model_path=modelPath, model_save=top_model_weights_path)


Train on 2132 samples, validate on 536 samples
Epoch 1/50
0s - loss: 6.5851 - acc: 0.4508 - val_loss: 5.6644 - val_acc: 0.4571
Epoch 2/50
0s - loss: 3.4335 - acc: 0.5563 - val_loss: 2.2755 - val_acc: 0.5000
Epoch 3/50
0s - loss: 1.0971 - acc: 0.6393 - val_loss: 1.2428 - val_acc: 0.5243
Epoch 4/50
0s - loss: 0.6560 - acc: 0.7317 - val_loss: 1.3331 - val_acc: 0.4963
Epoch 5/50
0s - loss: 0.5417 - acc: 0.7767 - val_loss: 1.3829 - val_acc: 0.5019
Epoch 6/50
0s - loss: 0.4130 - acc: 0.8354 - val_loss: 1.4304 - val_acc: 0.5392
Epoch 7/50
0s - loss: 0.3163 - acc: 0.8752 - val_loss: 1.5420 - val_acc: 0.5187
Epoch 8/50
0s - loss: 0.2728 - acc: 0.8963 - val_loss: 1.5069 - val_acc: 0.5168
Epoch 9/50
0s - loss: 0.2050 - acc: 0.9282 - val_loss: 1.6816 - val_acc: 0.5243
Epoch 10/50
0s - loss: 0.1788 - acc: 0.9273 - val_loss: 1.6940 - val_acc: 0.5485
Epoch 11/50
0s - loss: 0.1626 - acc: 0.9465 - val_loss: 1.7204 - val_acc: 0.5299
Epoch 12/50
0s - loss: 0.1236 - acc: 0.9578 - val_loss: 1.7611 - val_acc: 0.5243
Epoch 13/50
0s - loss: 0.1066 - acc: 0.9662 - val_loss: 1.8645 - val_acc: 0.5112
Epoch 14/50
0s - loss: 0.0929 - acc: 0.9662 - val_loss: 1.8626 - val_acc: 0.5280
Epoch 15/50
0s - loss: 0.0764 - acc: 0.9784 - val_loss: 1.8900 - val_acc: 0.5373
Epoch 16/50
1s - loss: 0.0669 - acc: 0.9761 - val_loss: 1.8557 - val_acc: 0.5317
Epoch 17/50
0s - loss: 0.0542 - acc: 0.9878 - val_loss: 1.9861 - val_acc: 0.5448
Epoch 18/50
0s - loss: 0.0424 - acc: 0.9892 - val_loss: 2.0198 - val_acc: 0.5578
Epoch 19/50
0s - loss: 0.0433 - acc: 0.9883 - val_loss: 2.0364 - val_acc: 0.5448
Epoch 20/50
0s - loss: 0.0405 - acc: 0.9892 - val_loss: 2.5172 - val_acc: 0.5131
Epoch 21/50
0s - loss: 0.0325 - acc: 0.9911 - val_loss: 2.1068 - val_acc: 0.5578
Epoch 22/50
0s - loss: 0.0312 - acc: 0.9916 - val_loss: 2.3122 - val_acc: 0.5522
Epoch 23/50
0s - loss: 0.0297 - acc: 0.9906 - val_loss: 2.2354 - val_acc: 0.5373
Epoch 24/50
0s - loss: 0.0248 - acc: 0.9930 - val_loss: 2.2286 - val_acc: 0.5634
Epoch 25/50
0s - loss: 0.0233 - acc: 0.9934 - val_loss: 2.3734 - val_acc: 0.5373
Epoch 26/50
0s - loss: 0.0172 - acc: 0.9972 - val_loss: 2.3186 - val_acc: 0.5541
Epoch 27/50
0s - loss: 0.0207 - acc: 0.9958 - val_loss: 2.4228 - val_acc: 0.5522
Epoch 28/50
0s - loss: 0.0241 - acc: 0.9944 - val_loss: 2.3224 - val_acc: 0.5522
Epoch 29/50
0s - loss: 0.0160 - acc: 0.9958 - val_loss: 2.4128 - val_acc: 0.5392
Epoch 30/50
0s - loss: 0.0116 - acc: 0.9967 - val_loss: 2.2848 - val_acc: 0.5616
Epoch 31/50
0s - loss: 0.0099 - acc: 0.9981 - val_loss: 2.4636 - val_acc: 0.5560
Epoch 32/50
1s - loss: 0.0078 - acc: 0.9986 - val_loss: 2.5315 - val_acc: 0.5709
Epoch 33/50
0s - loss: 0.0087 - acc: 0.9986 - val_loss: 2.4766 - val_acc: 0.5485
Epoch 34/50
0s - loss: 0.0221 - acc: 0.9934 - val_loss: 2.7372 - val_acc: 0.5429
Epoch 35/50
0s - loss: 0.0203 - acc: 0.9939 - val_loss: 2.4145 - val_acc: 0.5466
Epoch 36/50
0s - loss: 0.0121 - acc: 0.9986 - val_loss: 2.5365 - val_acc: 0.5485
Epoch 37/50
0s - loss: 0.0139 - acc: 0.9962 - val_loss: 2.3810 - val_acc: 0.5765
Epoch 38/50
1s - loss: 0.0089 - acc: 0.9986 - val_loss: 2.5223 - val_acc: 0.5541
Epoch 39/50
0s - loss: 0.0157 - acc: 0.9958 - val_loss: 2.6310 - val_acc: 0.5616
Epoch 40/50
1s - loss: 0.0083 - acc: 0.9972 - val_loss: 2.5573 - val_acc: 0.5597
Epoch 41/50
1s - loss: 0.0067 - acc: 0.9991 - val_loss: 2.6300 - val_acc: 0.5653
Epoch 42/50
1s - loss: 0.0034 - acc: 1.0000 - val_loss: 2.6698 - val_acc: 0.5672
Epoch 43/50
1s - loss: 0.0072 - acc: 0.9977 - val_loss: 2.6886 - val_acc: 0.5690
Epoch 44/50
1s - loss: 0.0131 - acc: 0.9972 - val_loss: 2.6070 - val_acc: 0.5653
Epoch 45/50
1s - loss: 0.0047 - acc: 0.9995 - val_loss: 2.5764 - val_acc: 0.5802
Epoch 46/50
1s - loss: 0.0080 - acc: 0.9977 - val_loss: 2.6428 - val_acc: 0.5634
Epoch 47/50
1s - loss: 0.0045 - acc: 0.9995 - val_loss: 2.7454 - val_acc: 0.5802
Epoch 48/50
1s - loss: 0.0059 - acc: 0.9981 - val_loss: 2.5667 - val_acc: 0.5840
Epoch 49/50
1s - loss: 0.0091 - acc: 0.9967 - val_loss: 2.6342 - val_acc: 0.5672
Epoch 50/50
1s - loss: 0.0046 - acc: 0.9995 - val_loss: 2.7544 - val_acc: 0.5914
Network's test score [loss, accuracy]: [2.7544217768000134, 0.59141791044776115]

In [37]:
feats_loc = '150_test_val/bottleneck_features_test.npy'
feats_labs = '150_test_val/labels_test.npy'
weight = 'balanced150run2/top_weights02.h5'
saveFile = 'balanced150'

In [38]:
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)


Confusion matrix, without normalization
[[136  34  45]
 [ 25  91  26]
 [ 45  37  97]]

Running the Top Model on the Fully Augmented Data

In this run we will be using the bottleneck features from taking the training data and augmenting it with the followin transformations; Vertical Flip, Horizontal Flip, 90 and 270 degree rotation, and 15, 30, and 45 degree rotation in both directions.


In [16]:
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_150fulltrans_train.npy'
train_labels = 'labels_150fulltrans_train.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_150fulltrans.h5'

In [17]:
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
                model_path=modelPath, model_save=top_model_weights_path)


Train on 23452 samples, validate on 536 samples
Epoch 1/50
9s - loss: 2.1150 - acc: 0.5010 - val_loss: 0.9707 - val_acc: 0.5429
Epoch 2/50
9s - loss: 0.8477 - acc: 0.6011 - val_loss: 1.0201 - val_acc: 0.5690
Epoch 3/50
9s - loss: 0.7545 - acc: 0.6495 - val_loss: 1.0338 - val_acc: 0.5709
Epoch 4/50
10s - loss: 0.6726 - acc: 0.6937 - val_loss: 1.1553 - val_acc: 0.5634
Epoch 5/50
12s - loss: 0.6003 - acc: 0.7329 - val_loss: 1.1349 - val_acc: 0.5709
Epoch 6/50
13s - loss: 0.5381 - acc: 0.7633 - val_loss: 1.2890 - val_acc: 0.5784
Epoch 7/50
13s - loss: 0.4795 - acc: 0.7914 - val_loss: 1.3350 - val_acc: 0.5728
Epoch 8/50
13s - loss: 0.4376 - acc: 0.8142 - val_loss: 1.3227 - val_acc: 0.5877
Epoch 9/50
12s - loss: 0.3839 - acc: 0.8383 - val_loss: 1.4640 - val_acc: 0.5560
Epoch 10/50
12s - loss: 0.3497 - acc: 0.8528 - val_loss: 1.5536 - val_acc: 0.5765
Epoch 11/50
12s - loss: 0.3168 - acc: 0.8711 - val_loss: 1.6186 - val_acc: 0.5784
Epoch 12/50
12s - loss: 0.2859 - acc: 0.8848 - val_loss: 1.6248 - val_acc: 0.5653
Epoch 13/50
12s - loss: 0.2593 - acc: 0.8979 - val_loss: 1.7318 - val_acc: 0.5765
Epoch 14/50
12s - loss: 0.2411 - acc: 0.9047 - val_loss: 1.8024 - val_acc: 0.5746
Epoch 15/50
12s - loss: 0.2157 - acc: 0.9148 - val_loss: 1.8175 - val_acc: 0.6101
Epoch 16/50
12s - loss: 0.2096 - acc: 0.9177 - val_loss: 1.9431 - val_acc: 0.5802
Epoch 17/50
12s - loss: 0.1886 - acc: 0.9273 - val_loss: 2.0001 - val_acc: 0.5709
Epoch 18/50
12s - loss: 0.1718 - acc: 0.9351 - val_loss: 1.9693 - val_acc: 0.5765
Epoch 19/50
12s - loss: 0.1730 - acc: 0.9361 - val_loss: 2.0154 - val_acc: 0.5802
Epoch 20/50
12s - loss: 0.1525 - acc: 0.9414 - val_loss: 2.1347 - val_acc: 0.5504
Epoch 21/50
12s - loss: 0.1443 - acc: 0.9461 - val_loss: 2.1748 - val_acc: 0.5933
Epoch 22/50
12s - loss: 0.1428 - acc: 0.9473 - val_loss: 2.2186 - val_acc: 0.5896
Epoch 23/50
12s - loss: 0.1324 - acc: 0.9505 - val_loss: 2.2057 - val_acc: 0.5522
Epoch 24/50
12s - loss: 0.1307 - acc: 0.9520 - val_loss: 2.2344 - val_acc: 0.5728
Epoch 25/50
12s - loss: 0.1166 - acc: 0.9571 - val_loss: 2.3347 - val_acc: 0.5765
Epoch 26/50
12s - loss: 0.1163 - acc: 0.9586 - val_loss: 2.3327 - val_acc: 0.5728
Epoch 27/50
12s - loss: 0.1117 - acc: 0.9623 - val_loss: 2.3773 - val_acc: 0.5728
Epoch 28/50
12s - loss: 0.1153 - acc: 0.9609 - val_loss: 2.3647 - val_acc: 0.5858
Epoch 29/50
12s - loss: 0.1057 - acc: 0.9634 - val_loss: 2.4121 - val_acc: 0.5504
Epoch 30/50
12s - loss: 0.1045 - acc: 0.9645 - val_loss: 2.5405 - val_acc: 0.5690
Epoch 31/50
12s - loss: 0.1051 - acc: 0.9655 - val_loss: 2.4535 - val_acc: 0.5709
Epoch 32/50
12s - loss: 0.0940 - acc: 0.9667 - val_loss: 2.5613 - val_acc: 0.5578
Epoch 33/50
12s - loss: 0.0979 - acc: 0.9653 - val_loss: 2.5431 - val_acc: 0.5858
Epoch 34/50
12s - loss: 0.0877 - acc: 0.9693 - val_loss: 2.6461 - val_acc: 0.5765
Epoch 35/50
12s - loss: 0.0901 - acc: 0.9697 - val_loss: 2.7325 - val_acc: 0.5522
Epoch 36/50
12s - loss: 0.0879 - acc: 0.9705 - val_loss: 2.7409 - val_acc: 0.5634
Epoch 37/50
12s - loss: 0.0861 - acc: 0.9707 - val_loss: 2.6940 - val_acc: 0.5765
Epoch 38/50
12s - loss: 0.0868 - acc: 0.9715 - val_loss: 2.6151 - val_acc: 0.5672
Epoch 39/50
12s - loss: 0.0818 - acc: 0.9726 - val_loss: 2.5774 - val_acc: 0.5765
Epoch 40/50
12s - loss: 0.0830 - acc: 0.9732 - val_loss: 2.5922 - val_acc: 0.5616
Epoch 41/50
12s - loss: 0.0789 - acc: 0.9748 - val_loss: 2.8912 - val_acc: 0.5672
Epoch 42/50
12s - loss: 0.0794 - acc: 0.9736 - val_loss: 2.7895 - val_acc: 0.5821
Epoch 43/50
12s - loss: 0.0727 - acc: 0.9758 - val_loss: 3.0036 - val_acc: 0.5578
Epoch 44/50
12s - loss: 0.0768 - acc: 0.9745 - val_loss: 2.8650 - val_acc: 0.5578
Epoch 45/50
12s - loss: 0.0734 - acc: 0.9767 - val_loss: 2.7810 - val_acc: 0.5634
Epoch 46/50
12s - loss: 0.0758 - acc: 0.9768 - val_loss: 2.9774 - val_acc: 0.5485
Epoch 47/50
12s - loss: 0.0740 - acc: 0.9764 - val_loss: 2.8870 - val_acc: 0.5653
Epoch 48/50
12s - loss: 0.0725 - acc: 0.9761 - val_loss: 2.9791 - val_acc: 0.5560
Epoch 49/50
12s - loss: 0.0690 - acc: 0.9773 - val_loss: 2.8205 - val_acc: 0.5728
Epoch 50/50
12s - loss: 0.0688 - acc: 0.9786 - val_loss: 2.8448 - val_acc: 0.5728
Network's test score [loss, accuracy]: [2.8447668943832172, 0.57276119402985071]

In [35]:
feats_loc = '150_test_val/bottleneck_features_test.npy'
feats_labs = '150_test_val/labels_test.npy'
weight = 'balanced150FullTrans/top_weights_150fulltrans.h5'
saveFile = 'balanced150FullTrans'

In [36]:
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)


Confusion matrix, without normalization
[[141  28  46]
 [ 26  76  40]
 [ 57  32  90]]

Running the Top Model at 224x224

In this next experiment we run the model with transformations on the data at a size of 224x224


In [22]:
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train_224.npy'
train_labels = 'labels_train_224.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_224.h5'

In [23]:
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
                model_path=modelPath, model_save=top_model_weights_path)


Train on 2132 samples, validate on 536 samples
Epoch 1/50
2s - loss: 9.0933 - acc: 0.4170 - val_loss: 8.1122 - val_acc: 0.4608
Epoch 2/50
2s - loss: 8.1198 - acc: 0.4634 - val_loss: 9.5227 - val_acc: 0.4030
Epoch 3/50
2s - loss: 7.4928 - acc: 0.5033 - val_loss: 7.4591 - val_acc: 0.4776
Epoch 4/50
2s - loss: 6.3570 - acc: 0.5432 - val_loss: 6.4214 - val_acc: 0.5466
Epoch 5/50
2s - loss: 5.6562 - acc: 0.5943 - val_loss: 5.6958 - val_acc: 0.5634
Epoch 6/50
2s - loss: 4.9501 - acc: 0.6196 - val_loss: 6.8575 - val_acc: 0.5168
Epoch 7/50
2s - loss: 4.3462 - acc: 0.6384 - val_loss: 7.0712 - val_acc: 0.4571
Epoch 8/50
2s - loss: 2.2618 - acc: 0.6731 - val_loss: 1.3050 - val_acc: 0.5914
Epoch 9/50
2s - loss: 0.7706 - acc: 0.7233 - val_loss: 1.1151 - val_acc: 0.5765
Epoch 10/50
2s - loss: 0.5902 - acc: 0.7617 - val_loss: 1.1186 - val_acc: 0.6101
Epoch 11/50
2s - loss: 0.4783 - acc: 0.8096 - val_loss: 1.2259 - val_acc: 0.6194
Epoch 12/50
2s - loss: 0.3681 - acc: 0.8462 - val_loss: 1.3938 - val_acc: 0.5933
Epoch 13/50
2s - loss: 0.3200 - acc: 0.8818 - val_loss: 1.3543 - val_acc: 0.5970
Epoch 14/50
2s - loss: 0.2700 - acc: 0.8987 - val_loss: 1.4228 - val_acc: 0.6213
Epoch 15/50
2s - loss: 0.2238 - acc: 0.9146 - val_loss: 1.4641 - val_acc: 0.5597
Epoch 16/50
2s - loss: 0.1715 - acc: 0.9353 - val_loss: 1.5231 - val_acc: 0.6101
Epoch 17/50
2s - loss: 0.1404 - acc: 0.9522 - val_loss: 1.7135 - val_acc: 0.5933
Epoch 18/50
2s - loss: 0.1347 - acc: 0.9461 - val_loss: 1.8070 - val_acc: 0.5970
Epoch 19/50
2s - loss: 0.1140 - acc: 0.9597 - val_loss: 1.6378 - val_acc: 0.6138
Epoch 20/50
2s - loss: 0.1024 - acc: 0.9658 - val_loss: 2.0474 - val_acc: 0.5877
Epoch 21/50
2s - loss: 0.0892 - acc: 0.9658 - val_loss: 2.0427 - val_acc: 0.6119
Epoch 22/50
2s - loss: 0.0864 - acc: 0.9719 - val_loss: 1.9937 - val_acc: 0.5653
Epoch 23/50
2s - loss: 0.0682 - acc: 0.9761 - val_loss: 2.4891 - val_acc: 0.5784
Epoch 24/50
2s - loss: 0.0682 - acc: 0.9747 - val_loss: 1.9282 - val_acc: 0.6119
Epoch 25/50
2s - loss: 0.0435 - acc: 0.9850 - val_loss: 2.1998 - val_acc: 0.6007
Epoch 26/50
2s - loss: 0.0445 - acc: 0.9850 - val_loss: 1.9286 - val_acc: 0.6250
Epoch 27/50
2s - loss: 0.0521 - acc: 0.9845 - val_loss: 2.3834 - val_acc: 0.5634
Epoch 28/50
2s - loss: 0.0627 - acc: 0.9784 - val_loss: 1.9833 - val_acc: 0.6231
Epoch 29/50
2s - loss: 0.0419 - acc: 0.9850 - val_loss: 2.2553 - val_acc: 0.5877
Epoch 30/50
2s - loss: 0.0302 - acc: 0.9902 - val_loss: 2.2751 - val_acc: 0.6138
Epoch 31/50
2s - loss: 0.0301 - acc: 0.9897 - val_loss: 2.2381 - val_acc: 0.6082
Epoch 32/50
2s - loss: 0.0190 - acc: 0.9958 - val_loss: 2.3391 - val_acc: 0.6119
Epoch 33/50
2s - loss: 0.0286 - acc: 0.9892 - val_loss: 2.2304 - val_acc: 0.6026
Epoch 34/50
2s - loss: 0.0315 - acc: 0.9883 - val_loss: 2.4235 - val_acc: 0.6119
Epoch 35/50
2s - loss: 0.0368 - acc: 0.9883 - val_loss: 2.3665 - val_acc: 0.6213
Epoch 36/50
2s - loss: 0.0132 - acc: 0.9958 - val_loss: 2.4955 - val_acc: 0.6381
Epoch 37/50
2s - loss: 0.0270 - acc: 0.9911 - val_loss: 2.6176 - val_acc: 0.5933
Epoch 38/50
2s - loss: 0.0335 - acc: 0.9864 - val_loss: 2.3849 - val_acc: 0.6250
Epoch 39/50
2s - loss: 0.0157 - acc: 0.9953 - val_loss: 2.4454 - val_acc: 0.6287
Epoch 40/50
2s - loss: 0.0160 - acc: 0.9934 - val_loss: 2.6720 - val_acc: 0.6119
Epoch 41/50
3s - loss: 0.0183 - acc: 0.9962 - val_loss: 2.6914 - val_acc: 0.5989
Epoch 42/50
3s - loss: 0.0230 - acc: 0.9930 - val_loss: 2.5981 - val_acc: 0.6082
Epoch 43/50
3s - loss: 0.0173 - acc: 0.9948 - val_loss: 2.8000 - val_acc: 0.5951
Epoch 44/50
3s - loss: 0.0182 - acc: 0.9930 - val_loss: 3.0679 - val_acc: 0.6007
Epoch 45/50
3s - loss: 0.0205 - acc: 0.9920 - val_loss: 2.7439 - val_acc: 0.6082
Epoch 46/50
3s - loss: 0.0137 - acc: 0.9953 - val_loss: 2.8064 - val_acc: 0.5896
Epoch 47/50
3s - loss: 0.0142 - acc: 0.9948 - val_loss: 2.5329 - val_acc: 0.5951
Epoch 48/50
3s - loss: 0.0188 - acc: 0.9930 - val_loss: 2.6906 - val_acc: 0.6082
Epoch 49/50
3s - loss: 0.0248 - acc: 0.9916 - val_loss: 2.6263 - val_acc: 0.6194
Epoch 50/50
4s - loss: 0.0119 - acc: 0.9944 - val_loss: 2.7021 - val_acc: 0.5989
Network's test score [loss, accuracy]: [2.7020884663311402, 0.59888059701492535]

Generating the Confusion Matrix for the Balanced 224x224 Run


In [25]:
feats_loc = '224_test_val/bottleneck_features_test.npy'
feats_labs = '224_test_val/labels_test.npy'
weight = 'balanced224/top_weights_224.h5'
saveFile = 'balanced224'

In [34]:
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)


Confusion matrix, without normalization
[[142  27  46]
 [ 33  84  25]
 [ 60  24  95]]

224x224 With Flips


In [5]:
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'Balanced224flips/bottleneck_features_train_224flip.npy'
train_labels = 'Balanced224flips/labels_train_224flip.npy'
test_bottleneck = '224_test_val/bottleneck_features_test.npy'
test_labels = '224_test_val/labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'Balanced224flips/top_weights_224flip.h5'

In [10]:
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
                model_path=modelPath, model_save=top_model_weights_path)


Train on 6396 samples, validate on 536 samples
Epoch 1/50
9s - loss: 5.5444 - acc: 0.4927 - val_loss: 1.2293 - val_acc: 0.5336
Epoch 2/50
9s - loss: 0.9085 - acc: 0.5758 - val_loss: 1.0227 - val_acc: 0.5765
Epoch 3/50
9s - loss: 0.8059 - acc: 0.6254 - val_loss: 1.0468 - val_acc: 0.5877
Epoch 4/50
9s - loss: 0.7276 - acc: 0.6645 - val_loss: 1.1345 - val_acc: 0.5765
Epoch 5/50
9s - loss: 0.6415 - acc: 0.7065 - val_loss: 1.1480 - val_acc: 0.5821
Epoch 6/50
9s - loss: 0.5666 - acc: 0.7461 - val_loss: 1.2448 - val_acc: 0.5933
Epoch 7/50
9s - loss: 0.5022 - acc: 0.7824 - val_loss: 1.3077 - val_acc: 0.5821
Epoch 8/50
9s - loss: 0.4410 - acc: 0.8114 - val_loss: 1.4826 - val_acc: 0.6063
Epoch 9/50
9s - loss: 0.3718 - acc: 0.8438 - val_loss: 1.5838 - val_acc: 0.5951
Epoch 10/50
9s - loss: 0.3322 - acc: 0.8604 - val_loss: 1.4636 - val_acc: 0.6231
Epoch 11/50
10s - loss: 0.2835 - acc: 0.8816 - val_loss: 1.5400 - val_acc: 0.6306
Epoch 12/50
10s - loss: 0.2514 - acc: 0.8960 - val_loss: 1.8208 - val_acc: 0.6026
Epoch 13/50
10s - loss: 0.2303 - acc: 0.9137 - val_loss: 1.6765 - val_acc: 0.6119
Epoch 14/50
10s - loss: 0.2010 - acc: 0.9220 - val_loss: 1.9095 - val_acc: 0.6101
Epoch 15/50
12s - loss: 0.1709 - acc: 0.9303 - val_loss: 1.8258 - val_acc: 0.5877
Epoch 16/50
13s - loss: 0.1508 - acc: 0.9442 - val_loss: 1.8652 - val_acc: 0.5877
Epoch 17/50
14s - loss: 0.1383 - acc: 0.9457 - val_loss: 2.0475 - val_acc: 0.5933
Epoch 18/50
14s - loss: 0.1218 - acc: 0.9550 - val_loss: 2.2270 - val_acc: 0.6082
Epoch 19/50
14s - loss: 0.1210 - acc: 0.9590 - val_loss: 2.0682 - val_acc: 0.5840
Epoch 20/50
14s - loss: 0.1054 - acc: 0.9609 - val_loss: 2.1913 - val_acc: 0.6213
Epoch 21/50
15s - loss: 0.0986 - acc: 0.9628 - val_loss: 2.2742 - val_acc: 0.5914
Epoch 22/50
14s - loss: 0.0802 - acc: 0.9697 - val_loss: 2.1924 - val_acc: 0.5933
Epoch 23/50
15s - loss: 0.0851 - acc: 0.9684 - val_loss: 2.3662 - val_acc: 0.6063
Epoch 24/50
16s - loss: 0.0828 - acc: 0.9730 - val_loss: 2.4260 - val_acc: 0.6026
Epoch 25/50
16s - loss: 0.0743 - acc: 0.9761 - val_loss: 2.4058 - val_acc: 0.6045
Epoch 26/50
19s - loss: 0.0575 - acc: 0.9800 - val_loss: 2.3726 - val_acc: 0.6175
Epoch 27/50
19s - loss: 0.0715 - acc: 0.9767 - val_loss: 2.3359 - val_acc: 0.6026
Epoch 28/50
21s - loss: 0.0545 - acc: 0.9789 - val_loss: 2.4690 - val_acc: 0.6082
Epoch 29/50
17s - loss: 0.0438 - acc: 0.9853 - val_loss: 2.6190 - val_acc: 0.6063
Epoch 30/50
16s - loss: 0.0570 - acc: 0.9806 - val_loss: 2.4797 - val_acc: 0.6063
Epoch 31/50
19s - loss: 0.0445 - acc: 0.9856 - val_loss: 2.8046 - val_acc: 0.5877
Epoch 32/50
19s - loss: 0.0529 - acc: 0.9811 - val_loss: 2.4910 - val_acc: 0.6045
Epoch 33/50
15s - loss: 0.0477 - acc: 0.9844 - val_loss: 2.4460 - val_acc: 0.6437
Epoch 34/50
15s - loss: 0.0345 - acc: 0.9884 - val_loss: 2.7882 - val_acc: 0.5802
Epoch 35/50
15s - loss: 0.0394 - acc: 0.9862 - val_loss: 2.5223 - val_acc: 0.6269
Epoch 36/50
15s - loss: 0.0428 - acc: 0.9867 - val_loss: 2.5540 - val_acc: 0.6194
Epoch 37/50
14s - loss: 0.0330 - acc: 0.9891 - val_loss: 2.7204 - val_acc: 0.6063
Epoch 38/50
15s - loss: 0.0454 - acc: 0.9847 - val_loss: 2.5845 - val_acc: 0.5951
Epoch 39/50
15s - loss: 0.0353 - acc: 0.9872 - val_loss: 2.8707 - val_acc: 0.6101
Epoch 40/50
15s - loss: 0.0397 - acc: 0.9866 - val_loss: 2.6724 - val_acc: 0.6045
Epoch 41/50
15s - loss: 0.0366 - acc: 0.9873 - val_loss: 2.7749 - val_acc: 0.5933
Epoch 42/50
15s - loss: 0.0320 - acc: 0.9889 - val_loss: 2.8324 - val_acc: 0.6101
Epoch 43/50
15s - loss: 0.0236 - acc: 0.9923 - val_loss: 2.7709 - val_acc: 0.6287
Epoch 44/50
15s - loss: 0.0313 - acc: 0.9898 - val_loss: 2.7679 - val_acc: 0.6119
Epoch 45/50
15s - loss: 0.0249 - acc: 0.9908 - val_loss: 2.7865 - val_acc: 0.6194
Epoch 46/50
15s - loss: 0.0299 - acc: 0.9914 - val_loss: 2.8752 - val_acc: 0.6418
Epoch 47/50
15s - loss: 0.0322 - acc: 0.9892 - val_loss: 2.8893 - val_acc: 0.6157
Epoch 48/50
15s - loss: 0.0334 - acc: 0.9872 - val_loss: 2.9702 - val_acc: 0.5970
Epoch 49/50
16s - loss: 0.0279 - acc: 0.9912 - val_loss: 2.8605 - val_acc: 0.6138
Epoch 50/50
15s - loss: 0.0351 - acc: 0.9898 - val_loss: 2.9512 - val_acc: 0.6157
Network's test score [loss, accuracy]: [2.9512313924618621, 0.61567164179104472]

In [11]:
feats_loc = '224_test_val/bottleneck_features_test.npy'
feats_labs = '224_test_val/labels_test.npy'
weight = 'balanced224flips/top_weights_224flip.h5'
saveFile = 'balanced224flip'

In [16]:
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)


Confusion matrix, without normalization
[[143  18  54]
 [ 24  86  32]
 [ 56  22 101]]

Thresholded Images at 224x224 with no Augmentations


In [77]:
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train_224th.npy'
train_labels = 'labels_train_224th.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_224th.h5'

In [78]:
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
                model_path=modelPath, model_save=top_model_weights_path)


Train on 2014 samples, validate on 510 samples
Epoch 1/50
3s - loss: 8.1830 - acc: 0.4464 - val_loss: 8.2889 - val_acc: 0.4529
Epoch 2/50
3s - loss: 7.1938 - acc: 0.5238 - val_loss: 8.9006 - val_acc: 0.4196
Epoch 3/50
3s - loss: 6.8866 - acc: 0.5328 - val_loss: 7.2767 - val_acc: 0.4980
Epoch 4/50
3s - loss: 5.9568 - acc: 0.5814 - val_loss: 7.6397 - val_acc: 0.4843
Epoch 5/50
3s - loss: 5.8106 - acc: 0.5983 - val_loss: 6.7615 - val_acc: 0.5294
Epoch 6/50
3s - loss: 5.3491 - acc: 0.6112 - val_loss: 7.0674 - val_acc: 0.5098
Epoch 7/50
3s - loss: 4.5578 - acc: 0.6594 - val_loss: 7.1049 - val_acc: 0.4922
Epoch 8/50
3s - loss: 4.3890 - acc: 0.6430 - val_loss: 5.2989 - val_acc: 0.5275
Epoch 9/50
3s - loss: 2.9174 - acc: 0.7046 - val_loss: 3.4844 - val_acc: 0.5569
Epoch 10/50
3s - loss: 1.2611 - acc: 0.7393 - val_loss: 1.4448 - val_acc: 0.5196
Epoch 11/50
3s - loss: 0.5694 - acc: 0.8059 - val_loss: 1.4692 - val_acc: 0.5412
Epoch 12/50
3s - loss: 0.4477 - acc: 0.8322 - val_loss: 1.5084 - val_acc: 0.5373
Epoch 13/50
3s - loss: 0.3319 - acc: 0.8764 - val_loss: 1.4581 - val_acc: 0.5569
Epoch 14/50
3s - loss: 0.2353 - acc: 0.9146 - val_loss: 1.7630 - val_acc: 0.5863
Epoch 15/50
3s - loss: 0.2207 - acc: 0.9156 - val_loss: 1.9107 - val_acc: 0.5627
Epoch 16/50
3s - loss: 0.1717 - acc: 0.9325 - val_loss: 1.7110 - val_acc: 0.5765
Epoch 17/50
3s - loss: 0.1418 - acc: 0.9469 - val_loss: 2.0517 - val_acc: 0.5941
Epoch 18/50
3s - loss: 0.1245 - acc: 0.9558 - val_loss: 1.8684 - val_acc: 0.5725
Epoch 19/50
3s - loss: 0.0960 - acc: 0.9647 - val_loss: 2.1547 - val_acc: 0.5667
Epoch 20/50
3s - loss: 0.0788 - acc: 0.9732 - val_loss: 2.3561 - val_acc: 0.5627
Epoch 21/50
3s - loss: 0.0832 - acc: 0.9752 - val_loss: 2.2611 - val_acc: 0.5765
Epoch 22/50
3s - loss: 0.0878 - acc: 0.9702 - val_loss: 2.4544 - val_acc: 0.5569
Epoch 23/50
3s - loss: 0.0587 - acc: 0.9796 - val_loss: 2.4578 - val_acc: 0.5588
Epoch 24/50
3s - loss: 0.0627 - acc: 0.9782 - val_loss: 2.3713 - val_acc: 0.5784
Epoch 25/50
3s - loss: 0.0359 - acc: 0.9856 - val_loss: 2.6396 - val_acc: 0.5588
Epoch 26/50
3s - loss: 0.0347 - acc: 0.9891 - val_loss: 2.6468 - val_acc: 0.5627
Epoch 27/50
3s - loss: 0.0361 - acc: 0.9876 - val_loss: 2.5269 - val_acc: 0.5706
Epoch 28/50
3s - loss: 0.0606 - acc: 0.9811 - val_loss: 2.8372 - val_acc: 0.5471
Epoch 29/50
3s - loss: 0.0460 - acc: 0.9826 - val_loss: 2.5065 - val_acc: 0.5569
Epoch 30/50
3s - loss: 0.0176 - acc: 0.9940 - val_loss: 2.7181 - val_acc: 0.5569
Epoch 31/50
3s - loss: 0.0368 - acc: 0.9911 - val_loss: 2.7062 - val_acc: 0.5902
Epoch 32/50
3s - loss: 0.0247 - acc: 0.9930 - val_loss: 2.6707 - val_acc: 0.5588
Epoch 33/50
3s - loss: 0.0199 - acc: 0.9930 - val_loss: 2.7710 - val_acc: 0.5941
Epoch 34/50
3s - loss: 0.0202 - acc: 0.9945 - val_loss: 2.9551 - val_acc: 0.5765
Epoch 35/50
3s - loss: 0.0454 - acc: 0.9841 - val_loss: 2.8813 - val_acc: 0.5647
Epoch 36/50
3s - loss: 0.0330 - acc: 0.9881 - val_loss: 2.9774 - val_acc: 0.5765
Epoch 37/50
3s - loss: 0.0260 - acc: 0.9916 - val_loss: 2.9989 - val_acc: 0.5510
Epoch 38/50
3s - loss: 0.0196 - acc: 0.9930 - val_loss: 2.8463 - val_acc: 0.5725
Epoch 39/50
3s - loss: 0.0143 - acc: 0.9965 - val_loss: 3.0877 - val_acc: 0.5549
Epoch 40/50
3s - loss: 0.0236 - acc: 0.9906 - val_loss: 3.1074 - val_acc: 0.5608
Epoch 41/50
3s - loss: 0.0102 - acc: 0.9960 - val_loss: 2.8901 - val_acc: 0.5824
Epoch 42/50
3s - loss: 0.0134 - acc: 0.9955 - val_loss: 3.1207 - val_acc: 0.5765
Epoch 43/50
3s - loss: 0.0042 - acc: 0.9990 - val_loss: 3.0206 - val_acc: 0.5824
Epoch 44/50
3s - loss: 0.0189 - acc: 0.9921 - val_loss: 3.2029 - val_acc: 0.5549
Epoch 45/50
3s - loss: 0.0120 - acc: 0.9955 - val_loss: 2.9361 - val_acc: 0.5784
Epoch 46/50
4s - loss: 0.0117 - acc: 0.9945 - val_loss: 2.8603 - val_acc: 0.5941
Epoch 47/50
3s - loss: 0.0217 - acc: 0.9935 - val_loss: 3.1290 - val_acc: 0.5863
Epoch 48/50
4s - loss: 0.0154 - acc: 0.9940 - val_loss: 3.1971 - val_acc: 0.5843
Epoch 49/50
4s - loss: 0.0097 - acc: 0.9965 - val_loss: 3.1335 - val_acc: 0.5961
Epoch 50/50
4s - loss: 0.0073 - acc: 0.9970 - val_loss: 3.3308 - val_acc: 0.5941
Network's test score [loss, accuracy]: [3.3308228604933796, 0.59411764752631091]

In [79]:
feats_loc = '224_threshold/bottleneck_features_test.npy'
feats_labs = '224_threshold/labels_test.npy'
weight = 'balanced224Threshold/top_weights_224th.h5'
saveFile = 'balanced224Threshold'

In [80]:
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)


Confusion matrix, without normalization
[[108  39  54]
 [ 15 101  22]
 [ 41  36  94]]

224x224 DDSM - Two Categories

Attempting to learn the difference between normal annd abnormal.


In [4]:
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'Balanced224Binary/bottleneck_features_train_224twoclass.npy'
train_labels = 'Balanced224Binary/labels_train_224twoclass.npy'
test_bottleneck = '224_binary/bottleneck_features_test.npy'
test_labels = '224_binary/labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'Balanced224Binary/top_weights_224twoclass.h5'

In [37]:
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
                model_path=modelPath, model_save=top_model_weights_path, epoch = 100)


(2132L, 7L, 7L, 512L)
Train on 2132 samples, validate on 726 samples
Epoch 1/100
10s - loss: 5.7163 - acc: 0.5872 - val_loss: 4.7777 - val_acc: 0.6749
Epoch 2/100
11s - loss: 5.1144 - acc: 0.6435 - val_loss: 4.6995 - val_acc: 0.6240
Epoch 3/100
10s - loss: 4.8718 - acc: 0.6482 - val_loss: 5.3574 - val_acc: 0.6460
Epoch 4/100
10s - loss: 4.8243 - acc: 0.6576 - val_loss: 5.0161 - val_acc: 0.6694
Epoch 5/100
10s - loss: 4.4224 - acc: 0.6689 - val_loss: 4.3386 - val_acc: 0.6515
Epoch 6/100
11s - loss: 4.6102 - acc: 0.6529 - val_loss: 4.3907 - val_acc: 0.6667
Epoch 7/100
11s - loss: 4.1972 - acc: 0.6787 - val_loss: 3.8385 - val_acc: 0.6694
Epoch 8/100
11s - loss: 4.1463 - acc: 0.6764 - val_loss: 4.5849 - val_acc: 0.6694
Epoch 9/100
11s - loss: 3.9566 - acc: 0.6848 - val_loss: 3.4660 - val_acc: 0.6708
Epoch 10/100
11s - loss: 3.4106 - acc: 0.6970 - val_loss: 2.9073 - val_acc: 0.6860
Epoch 11/100
10s - loss: 3.0349 - acc: 0.7036 - val_loss: 1.4431 - val_acc: 0.6584
Epoch 12/100
10s - loss: 2.1206 - acc: 0.7026 - val_loss: 1.0348 - val_acc: 0.6653
Epoch 13/100
11s - loss: 1.6728 - acc: 0.6876 - val_loss: 0.6838 - val_acc: 0.6350
Epoch 14/100
11s - loss: 1.1987 - acc: 0.6811 - val_loss: 0.6316 - val_acc: 0.6556
Epoch 15/100
11s - loss: 0.9080 - acc: 0.6829 - val_loss: 0.6282 - val_acc: 0.6419
Epoch 16/100
11s - loss: 0.7586 - acc: 0.6862 - val_loss: 0.6235 - val_acc: 0.6061
Epoch 17/100
11s - loss: 0.7021 - acc: 0.7134 - val_loss: 0.6194 - val_acc: 0.6501
Epoch 18/100
11s - loss: 0.6454 - acc: 0.7176 - val_loss: 0.6194 - val_acc: 0.6584
Epoch 19/100
10s - loss: 0.6433 - acc: 0.7350 - val_loss: 0.6021 - val_acc: 0.6832
Epoch 20/100
11s - loss: 0.6590 - acc: 0.6975 - val_loss: 0.6102 - val_acc: 0.6873
Epoch 21/100
11s - loss: 0.6192 - acc: 0.7176 - val_loss: 0.6089 - val_acc: 0.6873
Epoch 22/100
11s - loss: 0.6432 - acc: 0.7265 - val_loss: 0.6043 - val_acc: 0.6956
Epoch 23/100
11s - loss: 0.6006 - acc: 0.7265 - val_loss: 0.6076 - val_acc: 0.6818
Epoch 24/100
11s - loss: 0.5933 - acc: 0.7566 - val_loss: 0.6016 - val_acc: 0.6515
Epoch 25/100
11s - loss: 0.5753 - acc: 0.7350 - val_loss: 0.5934 - val_acc: 0.6942
Epoch 26/100
11s - loss: 0.5415 - acc: 0.7556 - val_loss: 0.6012 - val_acc: 0.6970
Epoch 27/100
11s - loss: 0.5826 - acc: 0.7411 - val_loss: 0.5925 - val_acc: 0.7025
Epoch 28/100
11s - loss: 0.5515 - acc: 0.7552 - val_loss: 0.6020 - val_acc: 0.6667
Epoch 29/100
10s - loss: 0.5246 - acc: 0.7636 - val_loss: 0.5909 - val_acc: 0.6873
Epoch 30/100
11s - loss: 0.5110 - acc: 0.7645 - val_loss: 0.6050 - val_acc: 0.6653
Epoch 31/100
11s - loss: 0.5172 - acc: 0.7777 - val_loss: 0.6083 - val_acc: 0.6818
Epoch 32/100
11s - loss: 0.4898 - acc: 0.7786 - val_loss: 0.6100 - val_acc: 0.6915
Epoch 33/100
11s - loss: 0.5255 - acc: 0.7678 - val_loss: 0.6107 - val_acc: 0.6901
Epoch 34/100
11s - loss: 0.5046 - acc: 0.7758 - val_loss: 0.6413 - val_acc: 0.6777
Epoch 35/100
11s - loss: 0.4939 - acc: 0.7903 - val_loss: 0.6631 - val_acc: 0.6612
Epoch 36/100
11s - loss: 0.5214 - acc: 0.7674 - val_loss: 0.6148 - val_acc: 0.6804
Epoch 37/100
10s - loss: 0.4944 - acc: 0.7932 - val_loss: 0.6247 - val_acc: 0.6777
Epoch 38/100
11s - loss: 0.4701 - acc: 0.7847 - val_loss: 0.6719 - val_acc: 0.6598
Epoch 39/100
11s - loss: 0.4888 - acc: 0.7941 - val_loss: 0.6371 - val_acc: 0.6887
Epoch 40/100
10s - loss: 0.4602 - acc: 0.8011 - val_loss: 0.6498 - val_acc: 0.6832
Epoch 41/100
11s - loss: 0.4532 - acc: 0.7988 - val_loss: 0.6495 - val_acc: 0.6873
Epoch 42/100
10s - loss: 0.4412 - acc: 0.8072 - val_loss: 0.6415 - val_acc: 0.7011
Epoch 43/100
11s - loss: 0.4471 - acc: 0.8110 - val_loss: 0.6589 - val_acc: 0.6887
Epoch 44/100
10s - loss: 0.4271 - acc: 0.8147 - val_loss: 0.6625 - val_acc: 0.6915
Epoch 45/100
10s - loss: 0.4798 - acc: 0.8091 - val_loss: 0.6464 - val_acc: 0.7025
Epoch 46/100
11s - loss: 0.4260 - acc: 0.8157 - val_loss: 0.6720 - val_acc: 0.6777
Epoch 47/100
11s - loss: 0.4340 - acc: 0.8110 - val_loss: 0.6542 - val_acc: 0.6956
Epoch 48/100
10s - loss: 0.4090 - acc: 0.8255 - val_loss: 0.6624 - val_acc: 0.6956
Epoch 49/100
11s - loss: 0.4382 - acc: 0.8138 - val_loss: 0.6631 - val_acc: 0.6777
Epoch 50/100
10s - loss: 0.3944 - acc: 0.8218 - val_loss: 0.6555 - val_acc: 0.6749
Epoch 51/100
11s - loss: 0.4195 - acc: 0.8208 - val_loss: 0.6640 - val_acc: 0.6873
Epoch 52/100
10s - loss: 0.3893 - acc: 0.8260 - val_loss: 0.6899 - val_acc: 0.6915
Epoch 53/100
11s - loss: 0.3982 - acc: 0.8265 - val_loss: 0.6831 - val_acc: 0.6680
Epoch 54/100
11s - loss: 0.3708 - acc: 0.8349 - val_loss: 0.7131 - val_acc: 0.6956
Epoch 55/100
11s - loss: 0.3658 - acc: 0.8452 - val_loss: 0.6873 - val_acc: 0.6942
Epoch 56/100
11s - loss: 0.3620 - acc: 0.8368 - val_loss: 0.6944 - val_acc: 0.6873
Epoch 57/100
11s - loss: 0.3549 - acc: 0.8490 - val_loss: 0.7045 - val_acc: 0.7039
Epoch 58/100
10s - loss: 0.3647 - acc: 0.8466 - val_loss: 0.6873 - val_acc: 0.7039
Epoch 59/100
10s - loss: 0.3488 - acc: 0.8588 - val_loss: 0.7395 - val_acc: 0.6818
Epoch 60/100
10s - loss: 0.3453 - acc: 0.8462 - val_loss: 0.7000 - val_acc: 0.7107
Epoch 61/100
11s - loss: 0.3549 - acc: 0.8565 - val_loss: 0.7769 - val_acc: 0.6515
Epoch 62/100
11s - loss: 0.3376 - acc: 0.8640 - val_loss: 0.7513 - val_acc: 0.6901
Epoch 63/100
10s - loss: 0.3489 - acc: 0.8485 - val_loss: 0.7293 - val_acc: 0.7039
Epoch 64/100
11s - loss: 0.3269 - acc: 0.8602 - val_loss: 0.7306 - val_acc: 0.6763
Epoch 65/100
11s - loss: 0.3725 - acc: 0.8447 - val_loss: 0.7240 - val_acc: 0.6901
Epoch 66/100
11s - loss: 0.3465 - acc: 0.8424 - val_loss: 0.7375 - val_acc: 0.6846
Epoch 67/100
11s - loss: 0.3359 - acc: 0.8663 - val_loss: 0.7407 - val_acc: 0.7052
Epoch 68/100
11s - loss: 0.3109 - acc: 0.8720 - val_loss: 0.7848 - val_acc: 0.6956
Epoch 69/100
11s - loss: 0.3178 - acc: 0.8729 - val_loss: 0.7867 - val_acc: 0.6791
Epoch 70/100
11s - loss: 0.3439 - acc: 0.8513 - val_loss: 0.7671 - val_acc: 0.6928
Epoch 71/100
11s - loss: 0.3146 - acc: 0.8588 - val_loss: 0.8019 - val_acc: 0.6915
Epoch 72/100
11s - loss: 0.3285 - acc: 0.8504 - val_loss: 0.7802 - val_acc: 0.6915
Epoch 73/100
11s - loss: 0.3142 - acc: 0.8691 - val_loss: 0.8149 - val_acc: 0.6887
Epoch 74/100
11s - loss: 0.3189 - acc: 0.8574 - val_loss: 0.8113 - val_acc: 0.6777
Epoch 75/100
11s - loss: 0.3076 - acc: 0.8705 - val_loss: 0.7949 - val_acc: 0.6818
Epoch 76/100
11s - loss: 0.2846 - acc: 0.8752 - val_loss: 0.8048 - val_acc: 0.6791
Epoch 77/100
11s - loss: 0.2790 - acc: 0.8846 - val_loss: 0.8443 - val_acc: 0.6818
Epoch 78/100
11s - loss: 0.2943 - acc: 0.8818 - val_loss: 0.8446 - val_acc: 0.6791
Epoch 79/100
11s - loss: 0.2860 - acc: 0.8813 - val_loss: 0.8304 - val_acc: 0.6804
Epoch 80/100
11s - loss: 0.2887 - acc: 0.8917 - val_loss: 0.8387 - val_acc: 0.6928
Epoch 81/100
11s - loss: 0.3050 - acc: 0.8776 - val_loss: 0.8286 - val_acc: 0.6970
Epoch 82/100
11s - loss: 0.2738 - acc: 0.8860 - val_loss: 0.8261 - val_acc: 0.6915
Epoch 83/100
11s - loss: 0.2692 - acc: 0.8973 - val_loss: 0.8764 - val_acc: 0.6901
Epoch 84/100
11s - loss: 0.2641 - acc: 0.8945 - val_loss: 0.8682 - val_acc: 0.6736
Epoch 85/100
11s - loss: 0.2659 - acc: 0.8996 - val_loss: 0.8350 - val_acc: 0.6915
Epoch 86/100
11s - loss: 0.2792 - acc: 0.8884 - val_loss: 0.8458 - val_acc: 0.6804
Epoch 87/100
11s - loss: 0.2532 - acc: 0.8963 - val_loss: 0.8811 - val_acc: 0.6818
Epoch 88/100
11s - loss: 0.2648 - acc: 0.8912 - val_loss: 0.8449 - val_acc: 0.6928
Epoch 89/100
11s - loss: 0.2594 - acc: 0.8945 - val_loss: 0.8800 - val_acc: 0.6694
Epoch 90/100
11s - loss: 0.2542 - acc: 0.9010 - val_loss: 0.8706 - val_acc: 0.6997
Epoch 91/100
11s - loss: 0.2484 - acc: 0.8996 - val_loss: 0.8909 - val_acc: 0.6873
Epoch 92/100
11s - loss: 0.2725 - acc: 0.8940 - val_loss: 0.9312 - val_acc: 0.6584
Epoch 93/100
11s - loss: 0.2638 - acc: 0.8912 - val_loss: 0.8966 - val_acc: 0.7052
Epoch 94/100
11s - loss: 0.2289 - acc: 0.9020 - val_loss: 0.9240 - val_acc: 0.6887
Epoch 95/100
11s - loss: 0.2402 - acc: 0.9038 - val_loss: 0.9173 - val_acc: 0.6956
Epoch 96/100
11s - loss: 0.2430 - acc: 0.8982 - val_loss: 0.8958 - val_acc: 0.6997
Epoch 97/100
10s - loss: 0.2519 - acc: 0.8949 - val_loss: 0.9070 - val_acc: 0.6791
Epoch 98/100
11s - loss: 0.2291 - acc: 0.9071 - val_loss: 0.9299 - val_acc: 0.6915
Epoch 99/100
11s - loss: 0.2470 - acc: 0.9085 - val_loss: 0.9220 - val_acc: 0.6860
Epoch 100/100
11s - loss: 0.2269 - acc: 0.9062 - val_loss: 0.9467 - val_acc: 0.6777
Network's test score [loss, accuracy]: [0.94668246325382521, 0.67768595024902301]

In [21]:
feats_loc = '224_binary/bottleneck_features_test.npy'
feats_labs = '224_binary/labels_test.npy'
weight = 'balanced224Binary/top_weights_224twoclass.h5'
saveFile = 'balanced224Twoclass'

In [22]:
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)


Confusion matrix, without normalization
[[294 111]
 [126 195]]

224x224 DDSM Thresholded Images - Two Categories


In [33]:
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train_224th_twoclass.npy'
train_labels = 'labels_train_224th_twoclass.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_224th_twoclass.h5'

In [35]:
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
                model_path=modelPath, model_save=top_model_weights_path)


Train on 2132 samples, validate on 688 samples
Epoch 1/50
2s - loss: 5.6654 - acc: 0.6055 - val_loss: 5.7420 - val_acc: 0.6134
Epoch 2/50
2s - loss: 4.9361 - acc: 0.6585 - val_loss: 5.2615 - val_acc: 0.6163
Epoch 3/50
2s - loss: 4.8171 - acc: 0.6538 - val_loss: 4.8884 - val_acc: 0.6526
Epoch 4/50
2s - loss: 4.2685 - acc: 0.6895 - val_loss: 4.9389 - val_acc: 0.6308
Epoch 5/50
2s - loss: 3.4425 - acc: 0.7270 - val_loss: 4.1251 - val_acc: 0.6206
Epoch 6/50
2s - loss: 1.9883 - acc: 0.7265 - val_loss: 1.1036 - val_acc: 0.6206
Epoch 7/50
2s - loss: 0.4826 - acc: 0.7871 - val_loss: 0.7872 - val_acc: 0.6076
Epoch 8/50
2s - loss: 0.3752 - acc: 0.8335 - val_loss: 0.8678 - val_acc: 0.6483
Epoch 9/50
2s - loss: 0.3141 - acc: 0.8635 - val_loss: 1.0520 - val_acc: 0.6613
Epoch 10/50
2s - loss: 0.2520 - acc: 0.8912 - val_loss: 1.0241 - val_acc: 0.6483
Epoch 11/50
2s - loss: 0.1943 - acc: 0.9207 - val_loss: 1.1214 - val_acc: 0.6308
Epoch 12/50
2s - loss: 0.1705 - acc: 0.9287 - val_loss: 1.1477 - val_acc: 0.6453
Epoch 13/50
2s - loss: 0.1196 - acc: 0.9597 - val_loss: 1.1482 - val_acc: 0.6221
Epoch 14/50
2s - loss: 0.0993 - acc: 0.9601 - val_loss: 1.3043 - val_acc: 0.6512
Epoch 15/50
2s - loss: 0.0866 - acc: 0.9723 - val_loss: 1.4471 - val_acc: 0.6250
Epoch 16/50
2s - loss: 0.0721 - acc: 0.9742 - val_loss: 1.2648 - val_acc: 0.6192
Epoch 17/50
2s - loss: 0.0543 - acc: 0.9780 - val_loss: 1.3697 - val_acc: 0.6352
Epoch 18/50
2s - loss: 0.0369 - acc: 0.9892 - val_loss: 1.4602 - val_acc: 0.6424
Epoch 19/50
2s - loss: 0.0386 - acc: 0.9892 - val_loss: 1.5137 - val_acc: 0.6541
Epoch 20/50
2s - loss: 0.0245 - acc: 0.9939 - val_loss: 1.6529 - val_acc: 0.6424
Epoch 21/50
2s - loss: 0.0196 - acc: 0.9948 - val_loss: 1.6608 - val_acc: 0.6453
Epoch 22/50
2s - loss: 0.0222 - acc: 0.9939 - val_loss: 1.8662 - val_acc: 0.6555
Epoch 23/50
2s - loss: 0.0367 - acc: 0.9859 - val_loss: 1.7436 - val_acc: 0.6337
Epoch 24/50
2s - loss: 0.0169 - acc: 0.9944 - val_loss: 1.7366 - val_acc: 0.6279
Epoch 25/50
2s - loss: 0.0225 - acc: 0.9939 - val_loss: 1.8307 - val_acc: 0.6424
Epoch 26/50
2s - loss: 0.0170 - acc: 0.9948 - val_loss: 1.8614 - val_acc: 0.6672
Epoch 27/50
2s - loss: 0.0255 - acc: 0.9892 - val_loss: 1.9788 - val_acc: 0.6628
Epoch 28/50
2s - loss: 0.0111 - acc: 0.9972 - val_loss: 1.9517 - val_acc: 0.6337
Epoch 29/50
2s - loss: 0.0096 - acc: 0.9972 - val_loss: 2.0733 - val_acc: 0.6497
Epoch 30/50
2s - loss: 0.0155 - acc: 0.9934 - val_loss: 2.0157 - val_acc: 0.6395
Epoch 31/50
2s - loss: 0.0096 - acc: 0.9972 - val_loss: 2.1630 - val_acc: 0.6395
Epoch 32/50
2s - loss: 0.0195 - acc: 0.9939 - val_loss: 2.2044 - val_acc: 0.6410
Epoch 33/50
2s - loss: 0.0082 - acc: 0.9958 - val_loss: 2.0791 - val_acc: 0.6439
Epoch 34/50
2s - loss: 0.0059 - acc: 0.9981 - val_loss: 2.2785 - val_acc: 0.6526
Epoch 35/50
2s - loss: 0.0052 - acc: 0.9986 - val_loss: 2.1873 - val_acc: 0.6541
Epoch 36/50
2s - loss: 0.0134 - acc: 0.9958 - val_loss: 2.2006 - val_acc: 0.6468
Epoch 37/50
2s - loss: 0.0069 - acc: 0.9962 - val_loss: 2.3117 - val_acc: 0.6526
Epoch 38/50
2s - loss: 0.0056 - acc: 0.9981 - val_loss: 2.2882 - val_acc: 0.6613
Epoch 39/50
2s - loss: 0.0107 - acc: 0.9953 - val_loss: 2.3937 - val_acc: 0.6381
Epoch 40/50
2s - loss: 0.0192 - acc: 0.9948 - val_loss: 2.3679 - val_acc: 0.6265
Epoch 41/50
3s - loss: 0.0073 - acc: 0.9981 - val_loss: 2.4673 - val_acc: 0.6453
Epoch 42/50
3s - loss: 0.0094 - acc: 0.9962 - val_loss: 2.4377 - val_acc: 0.6235
Epoch 43/50
3s - loss: 0.0040 - acc: 0.9991 - val_loss: 2.2931 - val_acc: 0.6410
Epoch 44/50
3s - loss: 0.0053 - acc: 0.9986 - val_loss: 2.2906 - val_acc: 0.6468
Epoch 45/50
3s - loss: 0.0063 - acc: 0.9991 - val_loss: 2.4707 - val_acc: 0.6512
Epoch 46/50
3s - loss: 0.0107 - acc: 0.9953 - val_loss: 2.3623 - val_acc: 0.6395
Epoch 47/50
3s - loss: 0.0080 - acc: 0.9977 - val_loss: 2.3055 - val_acc: 0.6613
Epoch 48/50
3s - loss: 0.0045 - acc: 0.9991 - val_loss: 2.3601 - val_acc: 0.6497
Epoch 49/50
3s - loss: 0.0024 - acc: 0.9995 - val_loss: 2.4945 - val_acc: 0.6483
Epoch 50/50
4s - loss: 0.0035 - acc: 0.9991 - val_loss: 2.4660 - val_acc: 0.6555
Network's test score [loss, accuracy]: [2.465950372607209, 0.65552325581395354]

In [40]:
feats_loc = '224_binary/bottleneck_features_test.npy'
feats_labs = '224_binary/labels_test.npy'
weight = 'balanced224Th_Binary/top_weights_224th_twoclass.h5'
saveFile = 'balanced224Th_Twoclass'

In [41]:
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)


Confusion matrix, without normalization
[[334  71]
 [218 103]]

Results

All results below are run against the train, test, validate csv files located at Breast Cancer Github Data

Aspect Ratio Squared Raw DDSM Images with Artifacts

1) Run 1: 150x150 image size, 50 Epochs, Batch Size 64

* Network's test score [loss, accuracy]: [2.4609192387381595, 0.58582089552238803]

2) Run 2: 150x150 image size, 50 Epochs, Batch Size 64, Full Augmentations

* Network's test score [loss, accuracy]: [2.8447668943832172, 0.57276119402985071]

3) Run 3: 224x224 image size, 50 Epochs, Batch Size 64

* Network's test score [loss, accuracy]: [2.7020884663311402, 0.59888059701492535]

4) Run 4: 224x224 image size, 50 Epochs, Bath Size 64, Vertical and Horizontal Flips

* Network's test score [loss, accuracy]: [3.1939952764938129, 0.57276119402985071]

Aspect Ratio Squard Thresholded DDSM Images

1) Run 1: 224x224 image size, 50 Epochs, Batch Size 64

* Network's test score [loss, accuracy]: [3.3308228604933796, 0.59411764752631091]

Two Class Problem

1) Run 1: 224x224 image size, DDSM images with Artifacts, 50 Epochs, Batch Size = 64

* Network's test score [loss, accuracy]: [2.0707934257412743, 0.69834710760221663]

2) Run 2: 224x224 image size, Thresholded images, 50 Epochs, Batch Size = 64

* Network's test score [loss, accuracy]: [2.465950372607209, 0.65552325581395354]