Transfer Learning

In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on ImageNet available from torchvision.

ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks built from convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please watch this video.

Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.

With torchvision.models you can download these pre-trained networks and use them in your applications. We'll include models in our imports now.


In [ ]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models

Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are [0.485, 0.456, 0.406] and the standard deviations are [0.229, 0.224, 0.225].


In [ ]:
data_dir = 'Cat_Dog_data'

# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                            std=[0.229, 0.224, 0.225])])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                           std=[0.229, 0.224, 0.225])])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train',
                                  transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test',
                                 transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data,
                                          batch_size=64,
                                          shuffle=True)
testloader = torch.utils.data.DataLoader(test_data,
                                         batch_size=64)
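
The cell above mentions checking how the transforms look, but the helper cell for that isn't included here. As a minimal sketch, you can grab one training batch and undo the normalization before plotting, using the same means and standard deviations as above:

# Grab one batch from the training loader
images, labels = next(iter(trainloader))

# Undo Normalize so the colors look right when plotted
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
img = images[0] * std + mean

plt.imshow(img.permute(1, 2, 0).clamp(0, 1))
plt.title(train_data.classes[labels[0].item()])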

We can load in a model such as DenseNet. Let's print out the model architecture so we can see what's going on.


In [ ]:
model = models.densenet121(pretrained=True)
model

This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer (classifier): Linear(in_features=1024, out_features=1000). This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.
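
As a quick sanity check, here's a sketch that mirrors DenseNet-121's own forward pass (in torchvision's implementation): push a dummy batch through the features part and confirm that the classifier receives 1024 values per image.

# A dummy 224x224 RGB batch through the feature extractor
x = torch.randn(1, 3, 224, 224)
features = model.features(x)                       # shape: [1, 1024, 7, 7]

# DenseNet applies ReLU and global average pooling before the classifier
out = F.adaptive_avg_pool2d(F.relu(features), (1, 1)).flatten(1)
print(out.shape)                                   # torch.Size([1, 1024])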


In [ ]:
# Freeze parameters so we don't backprop through them
for param in model.parameters():
    param.requires_grad = False

from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
                          ('fc1', nn.Linear(1024, 500)),
                          ('relu', nn.ReLU()),
                          ('fc2', nn.Linear(500, 2)),
                          ('output', nn.LogSoftmax(dim=1))
                          ]))
    
model.classifier = classifier

With our model built, we need to train the classifier. However, now we're using a really deep neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU, leading to training speedups on the order of 100x. It's also possible to train on multiple GPUs, further decreasing training time.

PyTorch, along with pretty much every other deep learning framework, uses CUDA to efficiently compute the forward and backward passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using model.to('cuda'). You can move them back from the GPU with model.to('cpu'), which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.
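
For example, here's a minimal sketch of moving a batch to the GPU, running it through the network, and bringing the output back to the CPU so you can work with it in NumPy. It assumes the data loaders defined above and falls back to the CPU when no GPU is available:

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)

# Inputs must live on the same device as the model's parameters
images, labels = next(iter(testloader))
images = images.to(device)

with torch.no_grad():
    log_probs = model(images)

# .cpu() copies the output back so it can be used outside PyTorch
probs = torch.exp(log_probs).cpu().numpy()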


In [ ]:
import time

In [ ]:
for device in ['cpu', 'cuda']:

    criterion = nn.NLLLoss()
    # Only train the classifier parameters, feature parameters are frozen
    optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)

    model.to(device)

    start = time.time()

    for ii, (inputs, labels) in enumerate(trainloader):

        # Move input and label tensors to the device
        inputs, labels = inputs.to(device), labels.to(device)

        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # Time three batches, then stop
        if ii == 2:
            break

    print(f"Device = {device}; Time per batch: {(time.time() - start) / 3:.3f} seconds")

You can write device-agnostic code which will automatically use CUDA if it's available, like so:

# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

...

# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)

From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.

Exercise: Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, which is also a good model to try out first. Make sure you are only training the classifier and that the parameters of the features part are frozen.


In [ ]:
# Use GPU if available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

In [ ]:
device

In [ ]:
## TODO: Use a pretrained model to classify the cat and dog images
### Note: ResNet-50's final layer is called fc, not classifier; see the sketch after this cell.
model = models.densenet121(pretrained=True)
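
If you'd rather try ResNet, the idea is the same, but the final layer hangs off model.fc instead of model.classifier. A sketch under that assumption (2048 is ResNet-50's fc input size; the hidden size of 128 is just illustrative):

resnet = models.resnet50(pretrained=True)
for param in resnet.parameters():
    param.requires_grad = False

resnet.fc = nn.Sequential(nn.Linear(2048, 128),
                          nn.ReLU(),
                          nn.Dropout(p=0.2),
                          nn.Linear(128, 2),
                          nn.LogSoftmax(dim=1))

# The optimizer would then get resnet.fc.parameters() instead of model.classifier.parameters()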

In [ ]:
# Freezing the layers by turning off the gradients
for param in model.parameters():
    param.requires_grad = False

In [ ]:
# Building the classifier part of the network
# It will replace the last part of the network
classifier = nn.Sequential(nn.Linear(in_features=1024, out_features=128),
                           nn.ReLU(),
                           nn.Dropout(p=0.2),
                           nn.Linear(in_features=128, out_features=2),
                           nn.LogSoftmax(dim=1))

model.classifier = classifier

In [ ]:
# Defining the loss
criterion = nn.NLLLoss()

In [ ]:
# Defining the optimizer
# Optimizing only the classifier parameters; the feature parameters stay frozen
optimizer = optim.Adam(params=model.classifier.parameters(),
                       lr=0.003)

In [ ]:
# Moving the model to the device
model.to(device)

In [ ]:
# Hyperparameters
epochs = 1
steps = 0
running_loss = 0
print_every_steps = 5

In [ ]:
for epoch in range(epochs):
    for images, labels in trainloader:
        steps += 1
        
        # Moving the data to the device
        images = images.to(device)
        labels = labels.to(device)
        
        # Resetting the gradient accumulator
        optimizer.zero_grad()
        # Calculating the log of the probabilities
        log_probs = model(images)
        # Calculating the loss
        loss = criterion(log_probs, labels)
        # Backward pass
        loss.backward()
        # Applying the backpropagation
        optimizer.step()
        
        running_loss += loss.item()
        
        if steps % print_every_steps == 0:
            # Switching to evaluation mode, which turns off dropout
            model.eval()
            
            test_loss = 0
            accuracy = 0
            
            # Evaluate on the test data without tracking gradients
            with torch.no_grad():
                for images_test, labels_test in testloader:
                    # Transferring the data to the device
                    images_test = images_test.to(device)
                    labels_test = labels_test.to(device)
                    # Calculating the log probabilities
                    log_probs_test = model(images_test)
                    # Calculating the loss
                    loss = criterion(log_probs_test, labels_test)
                    test_loss += loss.item()
                    
                    # Calculating the probabilities for each class
                    probs_test = torch.exp(log_probs_test)
                    # Obtaining the most likely class
                    top_ps_test, top_class_test = probs_test.topk(k=1, dim=1)
                    equality_test = top_class_test == labels_test.view(*top_class_test.shape)
                    accuracy += torch.mean(equality_test.type(torch.FloatTensor)).item()
            # Final print
            print(f"Epoch {epoch+1}/{epochs}.. "
                  f"Train loss: {running_loss / print_every_steps:.3f}.. "
                  f"Test loss: {test_loss / len(testloader):.3f}.. "
                  f"Test accuracy: {accuracy / len(testloader):.3f}")
            running_loss = 0
            # Changing mode back to the training mode
            model.train()