We will do the following steps in order:
1. Load and normalize the MNIST training and test datasets using torchvision
2. Define a feed-forward neural network
3. Define a loss function and an optimizer
4. Train the network on the training data
5. Test the network on the test data
Loading and normalizing MNIST
Using torchvision, it’s extremely easy to load MNIST.
import torch
import torchvision
import torchvision.transforms as transforms
The outputs of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1]. Note that MNIST images are grayscale (a single channel), so we normalize with one mean and one standard deviation. The dataset has 60,000 training samples and 10,000 test samples.
In [2]:
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))])  # single channel: (x - 0.5) / 0.5 maps [0, 1] to [-1, 1]

trainset = torchvision.datasets.MNIST(root='./data', train=True,
                                      download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
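To sanity-check the pipeline, we can pull one batch and inspect it (a quick sketch; it assumes the cells above have already run):
In [ ]:
images, labels = next(iter(trainloader))
print(images.shape)   # torch.Size([4, 1, 28, 28]): a batch of 4 grayscale 28x28 images
print(images.min().item(), images.max().item())  # close to -1.0 and 1.0 after normalization
print(labels)         # 4 digit labels, e.g. tensor([3, 7, 0, 9])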
Three fully connected layers (two of them hidden): the input size = height * width of the image (28 * 28 = 784), and the output size = the number of classes (which is 10 in the case of MNIST).
We use the base class nn.Module. The nn.Module base class mainly takes care of storing the parameters of the neural network (we verify this right after the class definition below).
In [4]:
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # flatten image: drop the channel dimension and view as (batch, 784)
        x = x[:, 0, ...].view(-1, 28 * 28)
        # feed layer 1
        out_layer1 = F.relu(self.fc1(x))
        # feed layer 2
        out_layer2 = F.relu(self.fc2(out_layer1))
        # feed layer 3 (raw scores; the loss below applies softmax internally)
        out_layer3 = self.fc3(out_layer2)
        return out_layer3

net = Net()
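Because each nn.Linear is assigned as an attribute, nn.Module registers its weights and biases automatically; a quick sketch to list them:
In [ ]:
for name, param in net.named_parameters():
    print(name, tuple(param.shape))
# fc1.weight (120, 784)
# fc1.bias (120,)
# fc2.weight (84, 120)
# fc2.bias (84,)
# fc3.weight (10, 84)
# fc3.bias (10,)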
For training we use a cross-entropy classification loss and SGD with momentum.
In [5]:
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
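Note that nn.CrossEntropyLoss expects raw, unnormalized scores (logits) plus integer class labels, and applies log-softmax internally. A minimal illustration with made-up values (dummy_logits and dummy_labels are hypothetical):
In [ ]:
dummy_logits = torch.randn(4, 10)          # made-up scores for a batch of 4 over 10 classes
dummy_labels = torch.tensor([3, 7, 0, 9])  # made-up ground-truth digits
print(criterion(dummy_logits, dummy_labels))  # a scalar loss tensor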
This is when things start to get interesting. We simply have to loop over our data iterator, feed the inputs to the network, and optimize.
In [12]:
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 100 == 99:  # print every 100 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 100))
            running_loss = 0.0

print('Finished Training')
We have trained the network for 2 passes over the training dataset. But we need to check whether the network has learnt anything at all.
We will check this by predicting the class label that the neural network outputs and comparing it against the ground truth. If the prediction is correct, we count the sample as a correct prediction.
Okay, first step: load the test set (we will display some of its images further below).
In [7]:
testset = torchvision.datasets.MNIST(root='./data', train=False,
                                     download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
Performance on the test dataset.
In [11]:
correct = 0
total = 0
with torch.no_grad():  # no gradients needed for evaluation
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)  # index of the highest score = predicted class
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
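Overall accuracy can hide classes the network handles poorly. As a sketch (reusing testloader and net from above), per-digit accuracy can be computed like this:
In [ ]:
class_correct = [0] * 10
class_total = [0] * 10
with torch.no_grad():
    for images, labels in testloader:
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        for label, pred in zip(labels, predicted):
            class_total[label.item()] += 1
            class_correct[label.item()] += int(label == pred)

for digit in range(10):
    print('Accuracy of digit %d: %d %%' % (
        digit, 100 * class_correct[digit] / max(class_total[digit], 1)))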
In [10]:
import matplotlib.pyplot as plt
import numpy as np

# function to show an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize: map [-1, 1] back to [0, 1]
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random test images
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() was removed in recent versions
predictions = net(images)
_, predicted = torch.max(predictions.data, 1)

# show images
imshow(torchvision.utils.make_grid(images))
# print the predicted labels
print(' '.join('%5s' % predicted[j].item() for j in range(4)))
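To compare against the ground truth, we can also print the true labels for the same four images (a small addition using the labels fetched above):
In [ ]:
print(' '.join('%5s' % labels[j].item() for j in range(4)))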