New York Artificial Intelligence in Healthcare:

Hack Night 01/16/2019

Human Protein Atlas Competition

Authors:

Dr. Rahul Remanan

Matthew Eng

Joseph Chen

This session we explored the Human Protein Atlas Image Classification and utilized the kernal Pretrained InceptionResNetV2 base classifier.

In this competition, Kagglers will develop models capable of classifying mixed patterns of proteins in microscope images. The Human Protein Atlas will use these models to build a tool integrated with their smart-microscopy system to identify a protein's location(s) from a high-throughput image.

When running this notebook on colab, run a GPU instance in order to ensure ample space when unzipping dataset. Go to Runtime --> Change Runtime type --> GPU.


In [0]:
!pip3 install kaggle
!pip3 install google


Requirement already satisfied: kaggle in /usr/local/lib/python3.6/dist-packages (1.5.1.1)
Requirement already satisfied: urllib3<1.23.0,>=1.15 in /usr/local/lib/python3.6/dist-packages (from kaggle) (1.22)
Requirement already satisfied: six>=1.10 in /usr/local/lib/python3.6/dist-packages (from kaggle) (1.11.0)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from kaggle) (2018.11.29)
Requirement already satisfied: python-dateutil in /usr/local/lib/python3.6/dist-packages (from kaggle) (2.5.3)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from kaggle) (2.18.4)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from kaggle) (4.28.1)
Requirement already satisfied: python-slugify in /usr/local/lib/python3.6/dist-packages (from kaggle) (2.0.1)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->kaggle) (3.0.4)
Requirement already satisfied: idna<2.7,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->kaggle) (2.6)
Requirement already satisfied: Unidecode>=0.04.16 in /usr/local/lib/python3.6/dist-packages (from python-slugify->kaggle) (1.0.23)
Requirement already satisfied: google in /usr/local/lib/python3.6/dist-packages (2.0.1)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.6/dist-packages (from google) (4.6.3)

Authenticate Kaggle by uploading kaggle.json file


In [0]:
from google.colab import files
upload = files.upload()


Upload widget is only available when the cell has been executed in the current browser session. Please rerun this cell to enable.
Saving kaggle.json to kaggle.json

In [0]:
!mkdir ~/.kaggle

In [0]:
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json

Download dataset

Ensure that you have joined the competition on your kaggle account


In [0]:
!kaggle competitions download -c human-protein-atlas-image-classification


Downloading sample_submission.csv to /content
  0% 0.00/446k [00:00<?, ?B/s]
100% 446k/446k [00:00<00:00, 62.5MB/s]
Downloading train.csv to /content
  0% 0.00/1.22M [00:00<?, ?B/s]
100% 1.22M/1.22M [00:00<00:00, 80.6MB/s]
Downloading test.zip to /content
100% 4.37G/4.37G [01:17<00:00, 52.6MB/s]
100% 4.37G/4.37G [01:17<00:00, 60.3MB/s]
Downloading train.zip to /content
100% 13.1G/13.1G [04:20<00:00, 76.0MB/s]
100% 13.1G/13.1G [04:20<00:00, 53.9MB/s]

Unzip files


In [0]:
!mkdir ./human_protein_atlas/
!mkdir ./human_protein_atlas/train
!mkdir ./human_protein_atlas/test


mkdir: cannot create directory ‘./human_protein_atlas/’: File exists
mkdir: cannot create directory ‘./human_protein_atlas/train’: File exists
mkdir: cannot create directory ‘./human_protein_atlas/test’: File exists

In [0]:
!mv train.csv ./human_protein_atlas/train.csv

In [0]:
!unzip -q ./train.zip -d ./human_protein_atlas/train

In [0]:
!unzip -q ./test.zip -d ./human_protein_atlas/test

Notebook

This notebook uses an inception resnet v2 model from the kernel Pretrained InceptionResNetV2 base classifier. We are using a pre-designed architecture that has been trained and does well on general image classification. We add a few more layers that are custom to our specific task.


In [0]:
import os, sys, math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image
import cv2
from imgaug import augmenters as iaa
from tqdm import tqdm

import warnings
warnings.filterwarnings("ignore")

In [0]:
INPUT_SHAPE = (299,299,3)
BATCH_SIZE = 10

Load dataset info


In [0]:
path_to_train = './human_protein_atlas/train/'
data = pd.read_csv('human_protein_atlas/train.csv')

train_dataset_info = []
for name, labels in zip(data['Id'], data['Target'].str.split(' ')):
    train_dataset_info.append({
        'path':os.path.join(path_to_train, name),
        'labels':np.array([int(label) for label in labels])})
train_dataset_info = np.array(train_dataset_info)

In [0]:
from sklearn.model_selection import train_test_split
train_ids, test_ids, train_targets, test_target = train_test_split(
    data['Id'], data['Target'], test_size=0.2, random_state=42)

Create datagenerator


In [0]:
class data_generator:
    
    def create_train(dataset_info, batch_size, shape, augument=True):
        assert shape[2] == 3
        while True:
            random_indexes = np.random.choice(len(dataset_info), batch_size)
            batch_images = np.empty((batch_size, shape[0], shape[1], shape[2]))
            batch_labels = np.zeros((batch_size, 28))
            for i, idx in enumerate(random_indexes):
                image = data_generator.load_image(
                    dataset_info[idx]['path'], shape)   
                if augument:
                    image = data_generator.augment(image)
                batch_images[i] = image
                batch_labels[i][dataset_info[idx]['labels']] = 1
            yield batch_images, batch_labels
            
    
    def load_image(path, shape):
        R = np.array(Image.open(path+'_red.png'))
        G = np.array(Image.open(path+'_green.png'))
        B = np.array(Image.open(path+'_blue.png'))
        Y = np.array(Image.open(path+'_yellow.png'))

        image = np.stack((
            R/2 + Y/2, 
            G/2 + Y/2, 
            B),-1)
        
        image = cv2.resize(image, (shape[0], shape[1]))
        image = np.divide(image, 255)
        return image  
                
            
    def augment(image):
        augment_img = iaa.Sequential([
            iaa.OneOf([
                iaa.Affine(rotate=0),
                iaa.Affine(rotate=90),
                iaa.Affine(rotate=180),
                iaa.Affine(rotate=270),
                iaa.Fliplr(0.5),
                iaa.Flipud(0.5),
            ])], random_order=True)
        
        image_aug = augment_img.augment_image(image)
        return image_aug

Show data


In [0]:
# create train datagen
input_shape = (299, 299, 3)
train_datagen = data_generator.create_train(
    train_dataset_info, 5, input_shape, augument=True)

In [0]:
images, labels = next(train_datagen)

fig, ax = plt.subplots(1,5,figsize=(25,5))
for i in range(5):
    ax[i].imshow(images[i])
print('min: {0}, max: {1}'.format(images.min(), images.max()))


min: 0.0, max: 1.0

Create model


In [0]:
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, load_model
from keras.layers import Activation
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Input
from keras.layers import BatchNormalization
from keras.layers import Conv2D
from keras.models import Model
from keras.applications import InceptionResNetV2
from keras.callbacks import ModelCheckpoint
from keras.callbacks import LambdaCallback
from keras.callbacks import Callback
from keras import metrics
from keras.optimizers import Adam 
from keras import backend as K
import tensorflow as tf
import keras

def create_model(input_shape, n_out):
    
    pretrain_model = InceptionResNetV2(
        include_top=False, 
        weights='imagenet', 
        input_shape=input_shape)    
    
    input_tensor = Input(shape=input_shape)
    bn = BatchNormalization()(input_tensor)
    x = pretrain_model(bn)
    x = Conv2D(128, kernel_size=(1,1), activation='relu')(x)
    x = Flatten()(x)
    x = Dropout(0.5)(x)
    x = Dense(512, activation='relu')(x)
    x = Dropout(0.5)(x)
    output = Dense(n_out, activation='sigmoid')(x)
    model = Model(input_tensor, output)
    
    return model


Using TensorFlow backend.

In [0]:
def f1(y_true, y_pred):
    tp = K.sum(K.cast(y_true*y_pred, 'float'), axis=0)
    fp = K.sum(K.cast((1-y_true)*y_pred, 'float'), axis=0)
    fn = K.sum(K.cast(y_true*(1-y_pred), 'float'), axis=0)

    p = tp / (tp + fp + K.epsilon())
    r = tp / (tp + fn + K.epsilon())

    f1 = 2*p*r / (p+r+K.epsilon())
    f1 = tf.where(tf.is_nan(f1), tf.zeros_like(f1), f1)
    return K.mean(f1)

In [0]:
def show_history(history):
    fig, ax = plt.subplots(1, 3, figsize=(15,5))
    ax[0].set_title('loss')
    ax[0].plot(history.epoch, history.history["loss"], label="Train loss")
    ax[0].plot(history.epoch, history.history["val_loss"], label="Validation loss")
    ax[1].set_title('f1')
    ax[1].plot(history.epoch, history.history["f1"], label="Train f1")
    ax[1].plot(history.epoch, history.history["val_f1"], label="Validation f1")
    ax[2].set_title('acc')
    ax[2].plot(history.epoch, history.history["acc"], label="Train acc")
    ax[2].plot(history.epoch, history.history["val_acc"], label="Validation acc")
    ax[0].legend()
    ax[1].legend()
    ax[2].legend()

In [0]:
keras.backend.clear_session()

model = create_model(
    input_shape=(299,299,3), 
    n_out=28)

model.summary()


Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.7/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5
219062272/219055592 [==============================] - 7s 0us/step
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         (None, 299, 299, 3)       0         
_________________________________________________________________
batch_normalization_204 (Bat (None, 299, 299, 3)       12        
_________________________________________________________________
inception_resnet_v2 (Model)  (None, 8, 8, 1536)        54336736  
_________________________________________________________________
conv2d_204 (Conv2D)          (None, 8, 8, 128)         196736    
_________________________________________________________________
flatten_1 (Flatten)          (None, 8192)              0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 8192)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               4194816   
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 28)                14364     
=================================================================
Total params: 58,742,664
Trainable params: 58,682,114
Non-trainable params: 60,550
_________________________________________________________________

Train model


In [0]:
checkpointer = ModelCheckpoint(
    './InceptionResNetV2.model', monitor = 'val_f1',
    verbose=2, save_best_only=True)

# no data augmentation training
train_generator = data_generator.create_train(
    train_dataset_info[train_ids.index], BATCH_SIZE, INPUT_SHAPE, augument=False)
validation_generator = data_generator.create_train(
    train_dataset_info[test_ids.index], 256, INPUT_SHAPE, augument=False)

model.layers[2].trainable = False

model.compile(
    loss='categorical_crossentropy',  
    optimizer=Adam(1e-3),
    metrics=['acc', f1])

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    validation_data=next(validation_generator),
    epochs=1, 
    verbose=1,
    callbacks=[checkpointer])


Epoch 1/1
100/100 [==============================] - 104s 1s/step - loss: 4.6811 - acc: 0.3330 - f1: 0.0824 - val_loss: 4.7740 - val_acc: 0.4180 - val_f1: 0.0829

Epoch 00001: val_loss improved from inf to 4.77400, saving model to ./InceptionResNetV2.model

In [0]:
# To prevent error during training, custom f1 scoring object must be defined
from keras.utils.generic_utils import get_custom_objects

get_custom_objects().update({'f1': f1})

In [0]:
# Take a checkpoint model and load into keras
from keras.models import load_model
checkpointer_savepath = './InceptionResNetV2.model'
model = load_model(checkpointer_savepath)

In [0]:
show_history(history)


Bayesian Optimization


In [0]:
!pip install scikit-optimize


Collecting scikit-optimize
  Downloading https://files.pythonhosted.org/packages/f4/44/60f82c97d1caa98752c7da2c1681cab5c7a390a0fdd3a55fac672b321cac/scikit_optimize-0.5.2-py2.py3-none-any.whl (74kB)
    100% |████████████████████████████████| 81kB 6.9MB/s 
Requirement already satisfied: scikit-learn>=0.19.1 in /usr/local/lib/python3.6/dist-packages (from scikit-optimize) (0.20.2)
Requirement already satisfied: scipy>=0.14.0 in /usr/local/lib/python3.6/dist-packages (from scikit-optimize) (1.1.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from scikit-optimize) (1.14.6)
Installing collected packages: scikit-optimize
Successfully installed scikit-optimize-0.5.2

In [0]:
from skopt.space import Real
from skopt.utils import use_named_args
from skopt import gp_minimize

def create_model_and_compile(input_shape, n_out, lr):
    """
    Args:
      input_shape: 
      n_out: number of output classes
    """
    pretrain_model = InceptionResNetV2(
        include_top=False, 
        weights='imagenet', 
        input_shape=input_shape)    
    
    input_tensor = Input(shape=input_shape)
    bn = BatchNormalization()(input_tensor)
    x = pretrain_model(bn)
    x = Conv2D(128, kernel_size=(1,1), activation='relu')(x)
    x = Flatten()(x)
    x = Dropout(0.5)(x)
    x = Dense(512, activation='relu')(x)
    x = Dropout(0.5)(x)
    output = Dense(n_out, activation='sigmoid')(x)
    model = Model(input_tensor, output)
    
    model.layers[2].trainable = False
    model.compile(
    loss='categorical_crossentropy',  
    optimizer=Adam(lr = lr),
    metrics=['acc', f1])
    
    return model

In [0]:
path_best_model = '/content/'
best_f1 = history.history['val_f1'][-1]
dimensions = [Real(name='learning_rate', low=1e-6, high=1e-3, prior='log-uniform'),]
#           Integer(name='num_nodes', low=10, high=256),
#           Categorical(name='activation', categories=['relu', 'sigmoid'])]

In [0]:
@use_named_args(dimensions=dimensions)
def fitness(learning_rate):

    model = create_model_and_compile(input_shape = (299, 299, 3), n_out = 28, lr = learning_rate)
    
    history = model.fit_generator(
        train_generator,
        steps_per_epoch=100,
        validation_data=next(validation_generator),
        epochs=1, 
        verbose=1,
        callbacks=[checkpointer])
    
    val_f1 = history.history['val_f1'][-1]
    global best_f1

    if val_f1<best_f1:
        best_f1 = val_f1

    del model
    K.clear_session()

    return val_f1

In [0]:
history.history


Out[0]:
{'acc': [0.3330000063031912],
 'f1': [0.0823672465607524],
 'loss': [4.681069808006287],
 'val_acc': [0.41796875424915925],
 'val_f1': [0.08286489403690211],
 'val_loss': [4.774004681035876]}

In [0]:
#running the automatic creation of models using gp_minimize and fitness function
from skopt import gp_minimize
search_result = gp_minimize(func=fitness,
                            dimensions=dimensions,
                            acq_func='EI',
                            n_calls=10,)
#                             x0=default_parameters)


Epoch 1/1
100/100 [==============================] - 157s 2s/step - loss: 4.6325 - acc: 0.2170 - f1: 0.0842 - val_loss: 4.6346 - val_acc: 0.4219 - val_f1: 0.0819

Epoch 00001: val_loss improved from 4.77400 to 4.63458, saving model to ./InceptionResNetV2.model
Epoch 1/1
100/100 [==============================] - 102s 1s/step - loss: 4.8479 - acc: 0.2980 - f1: 0.0850 - val_loss: 4.8491 - val_acc: 0.4609 - val_f1: 0.0836

Epoch 00001: val_loss did not improve from 4.63458
Epoch 1/1
100/100 [==============================] - 101s 1s/step - loss: 5.5150 - acc: 0.0620 - f1: 0.0813 - val_loss: 5.0525 - val_acc: 0.2656 - val_f1: 0.0846

Epoch 00001: val_loss did not improve from 4.63458
Epoch 1/1
100/100 [==============================] - 101s 1s/step - loss: 5.3732 - acc: 0.0980 - f1: 0.0805 - val_loss: 4.7982 - val_acc: 0.4531 - val_f1: 0.0857

Epoch 00001: val_loss did not improve from 4.63458
Epoch 1/1
100/100 [==============================] - 103s 1s/step - loss: 4.8384 - acc: 0.2820 - f1: 0.0858 - val_loss: 4.3275 - val_acc: 0.3945 - val_f1: 0.0751

Epoch 00001: val_loss improved from 4.63458 to 4.32751, saving model to ./InceptionResNetV2.model
Epoch 1/1
100/100 [==============================] - 102s 1s/step - loss: 5.4506 - acc: 0.0910 - f1: 0.0831 - val_loss: 5.0616 - val_acc: 0.4063 - val_f1: 0.0892

Epoch 00001: val_loss did not improve from 4.32751
Epoch 1/1
100/100 [==============================] - 103s 1s/step - loss: 5.9167 - acc: 0.0330 - f1: 0.0793 - val_loss: 5.1392 - val_acc: 0.2461 - val_f1: 0.0836

Epoch 00001: val_loss did not improve from 4.32751
Epoch 1/1
100/100 [==============================] - 104s 1s/step - loss: 4.8639 - acc: 0.2360 - f1: 0.0843 - val_loss: 4.3524 - val_acc: 0.2617 - val_f1: 0.0808

Epoch 00001: val_loss did not improve from 4.32751
Epoch 1/1
100/100 [==============================] - 101s 1s/step - loss: 5.1343 - acc: 0.1980 - f1: 0.0893 - val_loss: 4.8266 - val_acc: 0.2891 - val_f1: 0.0867

Epoch 00001: val_loss did not improve from 4.32751
Epoch 1/1
100/100 [==============================] - 103s 1s/step - loss: 5.5424 - acc: 0.1470 - f1: 0.0809 - val_loss: 5.4099 - val_acc: 0.4453 - val_f1: 0.0912

Epoch 00001: val_loss did not improve from 4.32751

In [0]:
search_result.x


Out[0]:
[0.000656227244977741]

Training with Data Aug


In [0]:
# with data augmentation
train_generator = data_generator.create_train(
    train_dataset_info[train_ids.index], BATCH_SIZE, INPUT_SHAPE, augument=True)
validation_generator = data_generator.create_train(
    train_dataset_info[test_ids.index], 256, INPUT_SHAPE, augument=False)

model.layers[2].trainable = True

model.compile(
    loss='categorical_crossentropy',  
    optimizer=Adam(1e-4),
    metrics=['acc', f1])

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    validation_data=next(validation_generator),
    epochs=180, 
    verbose=1,
    callbacks=[checkpointer])


Epoch 1/180
100/100 [==============================] - 233s 2s/step - loss: 4.3294 - acc: 0.3900 - f1: 0.0638 - val_loss: 4.5161 - val_acc: 0.4180 - val_f1: 0.0683

Epoch 00001: val_loss did not improve from 4.11580
Epoch 2/180
100/100 [==============================] - 127s 1s/step - loss: 4.3912 - acc: 0.4040 - f1: 0.0664 - val_loss: 4.6168 - val_acc: 0.4180 - val_f1: 0.0669

Epoch 00002: val_loss did not improve from 4.11580
Epoch 3/180
100/100 [==============================] - 127s 1s/step - loss: 4.1231 - acc: 0.4170 - f1: 0.0638 - val_loss: 5.6656 - val_acc: 0.4180 - val_f1: 0.0687

Epoch 00003: val_loss did not improve from 4.11580
Epoch 4/180
100/100 [==============================] - 127s 1s/step - loss: 4.2726 - acc: 0.3960 - f1: 0.0661 - val_loss: 4.5053 - val_acc: 0.4180 - val_f1: 0.0675

Epoch 00004: val_loss did not improve from 4.11580
Epoch 5/180
100/100 [==============================] - 127s 1s/step - loss: 4.2452 - acc: 0.3980 - f1: 0.0660 - val_loss: 4.4778 - val_acc: 0.4180 - val_f1: 0.0671

Epoch 00005: val_loss did not improve from 4.11580
Epoch 6/180
100/100 [==============================] - 127s 1s/step - loss: 4.3132 - acc: 0.4160 - f1: 0.0678 - val_loss: 4.4439 - val_acc: 0.4180 - val_f1: 0.0686

Epoch 00006: val_loss did not improve from 4.11580
Epoch 7/180
100/100 [==============================] - 127s 1s/step - loss: 4.2320 - acc: 0.4190 - f1: 0.0681 - val_loss: 4.4541 - val_acc: 0.4180 - val_f1: 0.0720

Epoch 00007: val_loss did not improve from 4.11580
Epoch 8/180
 15/100 [===>..........................] - ETA: 1:41 - loss: 4.1500 - acc: 0.4667 - f1: 0.0704
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-50-97fafb9a6ab2> in <module>()
     17     epochs=180,
     18     verbose=1,
---> 19     callbacks=[checkpointer])

/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name + '` call to the ' +
     90                               'Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1416             use_multiprocessing=use_multiprocessing,
   1417             shuffle=shuffle,
-> 1418             initial_epoch=initial_epoch)
   1419 
   1420     @interfaces.legacy_generator_methods_support

/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
    215                 outs = model.train_on_batch(x, y,
    216                                             sample_weight=sample_weight,
--> 217                                             class_weight=class_weight)
    218 
    219                 outs = to_list(outs)

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
   1215             ins = x + y + sample_weights
   1216         self._make_train_function()
-> 1217         outputs = self.train_function(ins)
   1218         return unpack_singleton(outputs)
   1219 

/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
   2713                 return self._legacy_call(inputs)
   2714 
-> 2715             return self._call(inputs)
   2716         else:
   2717             if py_any(is_tensor(x) for x in inputs):

/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in _call(self, inputs)
   2673             fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
   2674         else:
-> 2675             fetched = self._callable_fn(*array_vals)
   2676         return fetched[:len(self.outputs)]
   2677 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
   1437           ret = tf_session.TF_SessionRunCallable(
   1438               self._session._session, self._handle, args, status,
-> 1439               run_metadata_ptr)
   1440         if run_metadata:
   1441           proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

KeyboardInterrupt: 

In [0]:
show_history(history)

Create submit


In [0]:
model = load_model(
    './InceptionResNetV2.model', 
    custom_objects={'f1': f1})

In [0]:
submit = pd.read_csv('./sample_submission.csv')

In [0]:
%%time
predicted = []
for name in tqdm(submit['Id']):
    path = os.path.join('./human_protein_atlas/test/', name)
    image = data_generator.load_image(path, INPUT_SHAPE)
    score_predict = model.predict(image[np.newaxis])[0]
    label_predict = np.arange(28)[score_predict>=0.2]
    str_predict_label = ' '.join(str(l) for l in label_predict)
    predicted.append(str_predict_label)

In [0]:
submit['Predicted'] = predicted
submit.to_csv('submission.csv', index=False)