In this tutorial we will show how to use the Lime framework with PyTorch. Specifically, we will use Lime to explain a prediction generated by one of the pretrained ImageNet models.
Let's start by importing our dependencies. This code was tested with PyTorch 1.0 but should work with older versions as well.
In [1]:
import matplotlib.pyplot as plt
from PIL import Image
import torch.nn as nn
import numpy as np
import os, json
import torch
from torchvision import models, transforms
from torch.autograd import Variable
import torch.nn.functional as F
Load our test image and see how it looks.
In [2]:
def get_image(path):
    with open(os.path.abspath(path), 'rb') as f:
        with Image.open(f) as img:
            return img.convert('RGB')
img = get_image('./data/dogs.png')
plt.imshow(img)
Out[2]:
We need to convert this image to a PyTorch tensor and also apply the whitening (mean/std normalization) used by our pretrained model.
In [3]:
# resize and take the center part of the image to match what our model expects
def get_input_transform():
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    transf = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        normalize
    ])
    return transf
def get_input_tensors(img):
    transf = get_input_transform()
    # unsqueeze converts a single image to a batch of 1
    return transf(img).unsqueeze(0)
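As a quick sanity check (a hypothetical step, not required for the rest of the tutorial), the resulting batch should have shape [1, 3, 224, 224]: one image, three channels, 224x224 pixels after the center crop.
In [ ]:
# Hypothetical check: Resize(256) + CenterCrop(224) + ToTensor + unsqueeze(0)
# should produce a batch of one 3-channel 224x224 image.
print(get_input_tensors(img).shape)  # expected: torch.Size([1, 3, 224, 224])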
Load the pretrained Inception v3 model available in PyTorch.
In [4]:
model = models.inception_v3(pretrained=True)
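Any torchvision classifier with ImageNet weights should work here. For example, ResNet50 could be swapped in with one line (a sketch; the rest of the tutorial does not depend on the choice of model):
In [ ]:
# Optional alternative: any ImageNet-pretrained torchvision model works,
# e.g. ResNet50 instead of Inception v3.
# model = models.resnet50(pretrained=True)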
Load the label texts for ImageNet predictions so we know what the model is predicting.
In [5]:
idx2label, cls2label, cls2idx = [], {}, {}
with open(os.path.abspath('./data/imagenet_class_index.json'), 'r') as read_file:
    class_idx = json.load(read_file)
    idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]
    cls2label = {class_idx[str(k)][0]: class_idx[str(k)][1] for k in range(len(class_idx))}
    cls2idx = {class_idx[str(k)][0]: k for k in range(len(class_idx))}
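To make sure the mappings loaded correctly (a hypothetical check), all three lookups should cover the 1000 ImageNet classes:
In [ ]:
# Hypothetical check: imagenet_class_index.json defines 1000 classes.
print(len(idx2label), len(cls2label), len(cls2idx))  # expected: 1000 1000 1000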
Get the prediction for our image.
In [6]:
img_t = get_input_tensors(img)
model.eval()
logits = model(img_t)
The predictions we got are logits. Let's pass them through softmax to get probabilities and class labels for the top 5 predictions.
In [7]:
probs = F.softmax(logits, dim=1)
probs5 = probs.topk(5)
tuple((p, c, idx2label[c]) for p, c in
      zip(probs5[0][0].detach().numpy(), probs5[1][0].detach().numpy()))
Out[7]:
We are getting ready to use Lime. Lime generates an array of perturbed images from the original input image. So we need to provide two things: (1) the original image as a numpy array, and (2) a classification function that takes an array of perturbed images as input and produces the probability of each class for each image as output.
For PyTorch, we first need to define two separate transforms: (1) one that takes a PIL image, resizes it, and crops it, and (2) one that takes the resized, cropped image and applies the whitening.
In [8]:
def get_pil_transform():
    transf = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.CenterCrop(224)
    ])
    return transf

def get_preprocess_transform():
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    transf = transforms.Compose([
        transforms.ToTensor(),
        normalize
    ])
    return transf
pill_transf = get_pil_transform()
preprocess_transform = get_preprocess_transform()
Now we are ready to define the classification function that Lime needs. The input to this function is a numpy array of images, where each image is an ndarray of shape (height, width, channels). The output is a numpy array of shape (image index, classes), where each value is the probability for that image/class combination.
In [9]:
def batch_predict(images):
    model.eval()
    # apply whitening to each perturbed image and stack them into one batch
    batch = torch.stack(tuple(preprocess_transform(i) for i in images), dim=0)

    # run on GPU if available
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    batch = batch.to(device)

    logits = model(batch)
    probs = F.softmax(logits, dim=1)
    return probs.detach().cpu().numpy()
Let's test our function on the sample image.
In [10]:
test_pred = batch_predict([pill_transf(img)])
test_pred.squeeze().argmax()
Out[10]:
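To confirm that the batched path agrees with the earlier prediction, we can look up the label text for the winning index (a small sketch using the idx2label list defined above):
In [ ]:
# Hypothetical follow-up: map the predicted class index back to its label text.
print(idx2label[test_pred.squeeze().argmax()])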
Import lime and create an explanation for this prediction.
In [11]:
from lime import lime_image
In [12]:
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(np.array(pill_transf(img)),
                                         batch_predict,   # classification function
                                         top_labels=5,
                                         hide_color=0,
                                         num_samples=1000)  # number of images that will be sent to classification function
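Before plotting, we can peek at which labels Lime explained (a sketch; explanation.top_labels holds the top_labels=5 class indices, ordered by predicted probability):
In [ ]:
# Hypothetical inspection: translate the explained label indices to names.
print([idx2label[l] for l in explanation.top_labels])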
Let's use the mask on the image and see the areas that encourage the top prediction.
In [13]:
from skimage.segmentation import mark_boundaries
In [14]:
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
img_boundry1 = mark_boundaries(temp/255.0, mask)
plt.imshow(img_boundry1)
Out[14]:
Let's also turn on the areas that contribute against the top prediction.
In [15]:
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=10, hide_rest=False)
img_boundry2 = mark_boundaries(temp/255.0, mask)
plt.imshow(img_boundry2)
Out[15]:
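As an optional last step, we can go beyond the binary mask and plot each superpixel's raw Lime weight as a heatmap. This is a sketch that relies on the explanation object's local_exp attribute (per-label lists of (segment, weight) pairs) and its segments attribute (the segmentation map):
In [ ]:
# Map each superpixel in the segmentation to its weight for the top label.
ind = explanation.top_labels[0]
dict_heatmap = dict(explanation.local_exp[ind])
heatmap = np.vectorize(dict_heatmap.get)(explanation.segments)

# Use a symmetric diverging color scale so positive and negative
# contributions are visually comparable.
plt.imshow(heatmap, cmap='RdBu', vmin=-heatmap.max(), vmax=heatmap.max())
plt.colorbar()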