In this lab, you will create a model the PyTorch way. This will help you build more complicated models later.
Import the following libraries:
In [ ]:
from torch import nn,optim
import torch
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from torch.utils.data import Dataset, DataLoader
Set the random seed:
In [ ]:
torch.manual_seed(1)
Use this function for plotting:
In [ ]:
def Plot_2D_Plane(model, dataset, n=0):
    # extract the learned parameters
    w1 = model.state_dict()['linear.weight'].numpy()[0][0]
    w2 = model.state_dict()['linear.weight'].numpy()[0][1]
    b = model.state_dict()['linear.bias'].numpy()
    # data
    x1 = dataset.x[:, 0].view(-1, 1).numpy()
    x2 = dataset.x[:, 1].view(-1, 1).numpy()
    y = dataset.y.numpy()
    # make plane
    X, Y = np.meshgrid(np.arange(x1.min(), x1.max(), 0.05), np.arange(x2.min(), x2.max(), 0.05))
    yhat = w1*X + w2*Y + b
    # plotting
    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    ax.plot(x1[:, 0], x2[:, 0], y[:, 0], 'ro', label='y')  # scatter plot
    ax.plot_surface(X, Y, yhat)  # plane plot
    ax.set_xlabel('x1')
    ax.set_ylabel('x2')
    ax.set_zlabel('y')
    plt.title('estimated plane iteration:' + str(n))
    ax.legend()
    plt.show()
Create a dataset class with two-dimensional features:
In [ ]:
from torch.utils.data import Dataset, DataLoader
class Data2D(Dataset):
    def __init__(self):
        self.x = torch.zeros(20, 2)
        self.x[:, 0] = torch.arange(-1, 1, 0.1)
        self.x[:, 1] = torch.arange(-1, 1, 0.1)
        self.w = torch.tensor([[1.0], [1.0]])
        self.b = 1
        self.f = torch.mm(self.x, self.w) + self.b
        self.y = self.f + 0.1*torch.randn((self.x.shape[0], 1))
        self.len = self.x.shape[0]
    def __getitem__(self, index):
        return self.x[index], self.y[index]
    def __len__(self):
        return self.len
Create a dataset object:
In [ ]:
data_set=Data2D()
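As an optional sanity check (not part of the original lab), you can confirm the dataset contains 20 samples, each with a two-dimensional feature and a scalar target:
In [ ]:
# optional check: length of the dataset and one sample
print("number of samples:", len(data_set))
x_sample, y_sample = data_set[0]
print("first sample x:", x_sample, " y:", y_sample)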
Create a custom module:
In [ ]:
class linear_regression(nn.Module):
    def __init__(self, input_size, output_size):
        super(linear_regression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)
    def forward(self, x):
        yhat = self.linear(x)
        return yhat
Create a model. Use two features: make the input size 2 and the output size 1:
In [ ]:
model=linear_regression(2,1)
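If you like, you can verify the model's output shape by passing a few samples through it before training (a quick check, not required by the lab):
In [ ]:
# optional check: the output for 5 samples should have shape (5, 1)
yhat = model(data_set.x[0:5])
print(yhat.shape)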
Display the parameters:
In [ ]:
print(list(model.parameters()))
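You can also view the parameters by name via the model's state_dict, which is the same mapping the plotting function above uses (an optional alternative view):
In [ ]:
# optional: the state_dict maps parameter names to tensors
print("python dictionary:", model.state_dict())
print("keys:", model.state_dict().keys())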
Create an optimizer object. Set the learning rate to 0.1. Don't forget to pass the model parameters to the constructor:
In [ ]:
optimizer = optim.SGD(model.parameters(), lr = 0.1)
Create the criterion function that calculates the total loss or cost:
In [ ]:
criterion = nn.MSELoss()
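As a quick illustration (using two small made-up tensors), the criterion simply returns the mean of the squared differences between predictions and targets:
In [ ]:
# illustrative example with made-up tensors: MSE = mean((yhat - y)^2)
yhat_example = torch.tensor([[1.0], [2.0]])
y_example = torch.tensor([[1.5], [2.5]])
print(criterion(yhat_example, y_example))           # 0.25
print(torch.mean((yhat_example - y_example) ** 2))  # same value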
Create a data loader object. Set the batch_size equal to 2:
In [ ]:
train_loader=DataLoader(dataset=data_set,batch_size=2)
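To see what the loader produces, you can pull a single mini-batch and inspect the shapes (an optional check):
In [ ]:
# optional: grab one mini-batch from the loader
# with batch_size=2, x should have shape (2, 2) and y shape (2, 1)
x_batch, y_batch = next(iter(train_loader))
print(x_batch.shape, y_batch.shape)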
Run 100 epochs of Mini-Batch Gradient Descent and store the total loss or cost for every iteration. Remember that this is an approximation of the true total loss or cost:
In [ ]:
LOSS = []
Plot_2D_Plane(model, data_set)
epochs = 100
for epoch in range(epochs):
    for x, y in train_loader:
        # make a prediction
        yhat = model(x)
        # calculate the loss
        loss = criterion(yhat, y)
        # store the loss/cost
        LOSS.append(loss.item())
        # clear the gradient
        optimizer.zero_grad()
        # backward pass: compute the gradient of the loss with respect to all the learnable parameters
        loss.backward()
        # the step function on an optimizer makes an update to its parameters
        optimizer.step()
In [ ]:
Plot_2D_Plane(model,data_set,epoch)
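Since the data were generated with weights [1.0, 1.0] and bias 1, you can compare the learned parameters against those true values (a sanity check, not part of the original lab):
In [ ]:
# the learned parameters should be close to the true values w = [1.0, 1.0], b = 1
print("learned weights:", model.state_dict()['linear.weight'])
print("learned bias:", model.state_dict()['linear.bias'])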
In [ ]:
plt.plot(LOSS)
plt.xlabel("iterations")
plt.ylabel("Cost/total loss")
Create a validation dataset to evaluate the trained model, using a different random seed:
In [ ]:
torch.manual_seed(2)
validation_data=Data2D()
Y=validation_data.y
X=validation_data.x
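A minimal sketch of how you might evaluate the trained model on this validation data (assuming you simply want the mean squared error on the held-out set):
In [ ]:
# evaluate the trained model on the validation data (no gradient tracking needed)
with torch.no_grad():
    yhat_val = model(X)
    val_loss = criterion(yhat_val, Y)
print("validation loss:", val_loss.item())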
Joseph Santarcangelo has a PhD in Electrical Engineering. His research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition.
Other contributors: Michelle Carey, Mavis Zhou
Copyright © 2018 cognitiveclass.ai. This notebook and its source code are released under the terms of the MIT License.