In [2]:
import torch
import numpy as np
# use the GPU if one is available, otherwise fall back to the CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("We are using the following device for learning:", device)
Generate some tensors, convert them to float, and copy them to the device (possibly the GPU). Then carry out an operation on the tensors and print the result.
In [3]:
# create integer tensors, cast them to float, and move them to the chosen device
a = torch.tensor([1, 2, 3]).float().to(device)
b = torch.tensor([5, 6, 7]).float().to(device)
# element-wise multiplication is carried out on the device
result = a * b
print(a)
print(b)
print(result)
We can see that a, b, and result are all PyTorch tensors; the outcome of the element-wise multiplication is itself a tensor on the same device and is available immediately, since PyTorch executes operations eagerly. A computation graph linking a and b to the result is only recorded when the input tensors require gradients.
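As a minimal sketch of that autograd behaviour (the variables x, y, and z below are illustrative, not from the original notebook): creating the inputs with requires_grad=True makes PyTorch record the multiplication, so gradients can be propagated back with backward().
In [ ]:
# minimal autograd sketch: gradients flow back through the element-wise product
x = torch.tensor([1., 2., 3.], device=device, requires_grad=True)
y = torch.tensor([5., 6., 7.], device=device, requires_grad=True)
z = (x * y).sum()   # reduce to a scalar so backward() can be called directly
z.backward()
print(x.grad)       # dz/dx = y  ->  tensor([5., 6., 7.])
print(y.grad)       # dz/dy = x  ->  tensor([1., 2., 3.])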
In [6]:
# generate a 2x4 array of standard-normal random numbers
d = np.random.randn(2,4)
# construct a PyTorch tensor from the NumPy array, cast to float32, and move it to the device
e = torch.from_numpy(d).float().to(device)
print(e)
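Going the other way (a small sketch assuming the tensor e from above): .cpu().numpy() converts a tensor back into a NumPy array; the .cpu() call is needed because NumPy arrays always live in host memory.
In [ ]:
# copy the tensor back to host memory (a no-op on a CPU-only machine) and convert it to a numpy array
f = e.cpu().numpy()
print(f)
print(type(f))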