PyTorch Tutorial - Part 1

This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).

This code illustrates

  • Getting accustomed to the basics of PyTorch
  • Carrying out simple tensor operations

In [2]:
import torch
import numpy as np

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("We are using the following device for learning:",device)


We are using the following device for learning: cpu

Generate some tensors, convert them to float, and copy them to the device (possibly the GPU). Then carry out an operation on the tensors and print the result.


In [3]:
a = torch.tensor([1,2,3]).float().to(device)
b = torch.tensor([5,6,7]).float().to(device)
result = a * b
print(a)
print(b)
print(result)


tensor([1., 2., 3.])
tensor([5., 6., 7.])
tensor([ 5., 12., 21.])

We can see that the objects a, b, and result are all PyTorch tensors; the (element-wise) multiplication linking a and b to the result is carried out directly on the device, and the result is immediately available. Note that a computation graph for automatic differentiation is only recorded when the input tensors are created with requires_grad=True.
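
As a minimal sketch of how such a graph is recorded (assuming the same device as above; the variable names x, y, and z are only used for illustration), the inputs are created with requires_grad=True and the gradients are obtained via backward():

In [ ]:
x = torch.tensor([1., 2., 3.], device=device, requires_grad=True)
y = torch.tensor([5., 6., 7.], device=device, requires_grad=True)
z = (x * y).sum()   # scalar node at the end of the computation graph
z.backward()        # backpropagate through the graph
print(x.grad)       # gradient of z with respect to x, which equals y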


In [6]:
# generate a random 2x4 matrix as a numpy array
d = np.random.randn(2,4)

# construct a PyTorch tensor from the numpy array and move it to the device
e = torch.from_numpy(d).float().to(device)
print(e)


tensor([[-0.2350, -0.0184, -0.1093,  0.8630],
        [ 0.7927,  1.0190,  0.9225,  0.0544]])
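
The conversion also works in the opposite direction: a tensor can be moved back to the CPU and turned into a numpy array. A short sketch (the variable name f is only used for illustration):

In [ ]:
# move the tensor back to the CPU (a no-op if it already lives there) and convert it to a numpy array
f = e.cpu().numpy()
print(f)
print(type(f))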

In [ ]: