Here is a simple example of using LIME for image classification with Keras (v2 or greater).


In [1]:
import os
import keras
from keras.applications import inception_v3 as inc_net
from keras.preprocessing import image
from keras.applications.imagenet_utils import decode_predictions
from skimage.io import imread
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
print('Notebook run using keras:', keras.__version__)


Using TensorFlow backend.
Notebook run using keras: 2.3.1

Using Inception

Here we create a standard pretrained InceptionV3 model and run it on images, first preprocessing them with Inception's own preprocessing function.


In [2]:
inet_model = inc_net.InceptionV3()


Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.5/inception_v3_weights_tf_dim_ordering_tf_kernels.h5
96116736/96112376 [==============================] - 1s 0us/step

In [3]:
def transform_img_fn(path_list):
    out = []
    for img_path in path_list:
        # Load and resize to InceptionV3's expected 299x299 input
        img = image.load_img(img_path, target_size=(299, 299))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        # Scale pixel values to [-1, 1], as InceptionV3 expects
        x = inc_net.preprocess_input(x)
        out.append(x)
    return np.vstack(out)

Let's see the top 5 predictions for an image.


In [4]:
images = transform_img_fn([os.path.join('data','cat_mouse.jpg')])
# I'm dividing by 2 and adding 0.5 because Inception's preprocessing maps pixels to [-1, 1]; this maps them back to [0, 1] for display
plt.imshow(images[0] / 2 + 0.5)
preds = inet_model.predict(images)
for x in decode_predictions(preds)[0]:
    print(x)


Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/imagenet_class_index.json
40960/35363 [==================================] - 0s 0us/step
('n02133161', 'American_black_bear', 0.63716024)
('n02105056', 'groenendael', 0.03181813)
('n02104365', 'schipperke', 0.02994436)
('n01883070', 'wombat', 0.028509539)
('n01877812', 'wallaby', 0.02509348)
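
As an aside on the display trick above: inception_v3.preprocess_input scales pixels from [0, 255] to [-1, 1] (x / 127.5 - 1), so dividing by 2 and adding 0.5 maps them back into [0, 1] for matplotlib. A quick sanity check (a sketch, not an original cell of this notebook):

raw = np.array([[0.0, 127.5, 255.0]])
scaled = inc_net.preprocess_input(raw.copy())  # expect [[-1.  0.  1.]]
print(scaled)
print(scaled / 2 + 0.5)                        # expect [[0.  0.5  1.]]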

Explanation

Now let's get an explanation


In [7]:
%load_ext autoreload
%autoreload 2
import os, sys
try:
    import lime
except ImportError:
    # lime isn't installed: add the repo root so the local package is importable
    sys.path.append(os.path.join('..', '..'))
    import lime
from lime import lime_image


The autoreload extension is already loaded. To reload it, use:
  %reload_ext autoreload

In [8]:
explainer = lime_image.LimeImageExplainer()

hide_color is the color used for a superpixel that is turned OFF. Alternatively, if it is None, each hidden superpixel is replaced by the average of its own pixels. Here we set it to 0, which in the representation used by the Inception model (pixels in [-1, 1]) means gray.
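
For comparison, the averaging variant would be called like this (a sketch, not run in this notebook):

# hide_color=None replaces each OFF superpixel with the mean of its own pixels
explanation_avg = explainer.explain_instance(images[0], inet_model.predict,
                                             top_labels=5, hide_color=None,
                                             num_samples=1000)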


In [9]:
%%time
# hide_color=0 paints each OFF superpixel gray (see above); num_samples is the number of perturbed images LIME generates
explanation = explainer.explain_instance(images[0], inet_model.predict, top_labels=5, hide_color=0, num_samples=1000)


CPU times: user 8min 7s, sys: 7.34 s, total: 8min 14s
Wall time: 4min 17s

Image classifiers are slow to explain, since LIME has to run the model on num_samples perturbed copies of the image. For comparison, the same explanation took 1min 29s on my Surface Book's dGPU.
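
If the model runs on a GPU, one knob worth trying is explain_instance's batch_size argument, which controls how many perturbed images are sent through predict per call (it defaults to 10). A sketch of the same call as above with a larger batch:

# Larger batches per predict call can make better use of a GPU
explanation = explainer.explain_instance(images[0], inet_model.predict,
                                         top_labels=5, hide_color=0,
                                         num_samples=1000, batch_size=100)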

Now let's see the explanation for the top class (American black bear).

We can see the top 5 superpixels that are most positive towards the class, with the rest of the image hidden:


In [10]:
from skimage.segmentation import mark_boundaries

In [11]:
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
plt.imshow(mark_boundaries(temp / 2 + 0.5, mask))


Out[11]:
<matplotlib.image.AxesImage at 0x7fddb3b572b0>

Or with the rest of the image present:


In [12]:
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
plt.imshow(mark_boundaries(temp / 2 + 0.5, mask))


Out[12]:
<matplotlib.image.AxesImage at 0x7fddb3b3f8d0>

We can also see the 'pros and cons' (pros in green, cons in red)


In [13]:
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=10, hide_rest=False)
plt.imshow(mark_boundaries(temp / 2 + 0.5, mask))


Out[13]:
<matplotlib.image.AxesImage at 0x7fddb3a9d898>

Or only the pros and cons that have a weight of at least 0.1:


In [14]:
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=1000, hide_rest=False, min_weight=0.1)
plt.imshow(mark_boundaries(temp / 2 + 0.5, mask))


Out[14]:
<matplotlib.image.AxesImage at 0x7fddb3a7f9b0>
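
Choosing a min_weight cutoff is easier after inspecting the actual weights. explanation.local_exp (also used for the heatmap below) maps each label to a list of (superpixel, weight) pairs; a quick look, as a sketch:

# Sort the (superpixel, weight) pairs for the top label by |weight|
weights = sorted(explanation.local_exp[explanation.top_labels[0]],
                 key=lambda w: -abs(w[1]))
print(weights[:5])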

Alternatively, we can plot the explanation weights as a heatmap; the colorbar shows the weight values.


In [29]:
# Select the same class explained in the figures above.
ind = explanation.top_labels[0]

# Map each explanation weight to the corresponding superpixel
dict_heatmap = dict(explanation.local_exp[ind])
heatmap = np.vectorize(dict_heatmap.get)(explanation.segments)

# Plot. The visualization makes more sense with a symmetric colorbar.
plt.imshow(heatmap, cmap='RdBu', vmin=-heatmap.max(), vmax=heatmap.max())
plt.colorbar()


Out[29]:
<matplotlib.colorbar.Colorbar at 0x7fddb34d9da0>

Let's see the explanation for another of the predicted classes, wombat (ranked fourth above).

Most positive towards wombat:


In [15]:
# 106 is the ImageNet class index for 'wombat'
temp, mask = explanation.get_image_and_mask(106, positive_only=True, num_features=5, hide_rest=True)
plt.imshow(mark_boundaries(temp / 2 + 0.5, mask))


Out[15]:
<matplotlib.image.AxesImage at 0x7fddb39e0940>

Pros and cons:


In [16]:
temp, mask = explanation.get_image_and_mask(106, positive_only=False, num_features=10, hide_rest=False)
plt.imshow(mark_boundaries(temp / 2 + 0.5, mask))


Out[16]:
<matplotlib.image.AxesImage at 0x7fddb39bda58>
