In [1]:
%matplotlib inline

What is PyTorch?

It’s a Python-based scientific computing package targeted at two audiences:

  • A replacement for NumPy that uses the power of GPUs
  • A deep learning research platform that provides maximum flexibility and speed

Getting Started

Tensors

Tensors are similar to NumPy’s ndarrays, with the addition that Tensors can also be used on a GPU to accelerate computing.


In [2]:
from __future__ import print_function
import torch

Construct a 5x3 matrix, uninitialized:


In [3]:
x = torch.Tensor(5, 3)
print(x)


1.00000e-10 *
  0.0000  4.6566  0.0000
  4.6566  0.0000  0.0000
  0.0000  4.6566  0.0000
  4.6566  0.0000  4.6566
  0.0000  0.0000  0.0000
[torch.FloatTensor of size 5x3]

Construct a randomly initialized matrix:


In [4]:
x = torch.rand(5, 3)
print(x)


 0.5836  0.9306  0.4883
 0.4927  0.6504  0.7656
 0.6796  0.5058  0.4498
 0.4805  0.8398  0.3648
 0.1439  0.2373  0.8640
[torch.FloatTensor of size 5x3]

Get its size:


In [5]:
print(x.size())


torch.Size([5, 3])

Note

``torch.Size`` is in fact a tuple, so it supports all tuple operations.
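As a quick illustration (a small sketch, not part of the original cells), unpacking, ``len()``, and indexing all work on a ``torch.Size``:

```python
import torch

x = torch.rand(5, 3)
size = x.size()

# torch.Size behaves like a tuple: unpacking, len(), and indexing all work
rows, cols = size
print(rows, cols)         # 5 3
print(len(size))          # 2
print(size[0] * size[1])  # 15, the total number of elements
```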

Operations

There are multiple syntaxes for operations. Let’s look at addition as an example.

Addition: syntax 1


In [6]:
y = torch.rand(5, 3)
print(x + y)


 0.6433  1.6999  1.3145
 0.7993  0.9306  1.6244
 1.1707  1.3203  0.9556
 0.6730  1.0785  0.6450
 0.2143  0.7599  1.1299
[torch.FloatTensor of size 5x3]

Addition: syntax 2


In [7]:
print(torch.add(x, y))


 0.6433  1.6999  1.3145
 0.7993  0.9306  1.6244
 1.1707  1.3203  0.9556
 0.6730  1.0785  0.6450
 0.2143  0.7599  1.1299
[torch.FloatTensor of size 5x3]

Addition: giving an output tensor


In [8]:
result = torch.Tensor(5, 3)
torch.add(x, y, out=result)
print(result)


 0.6433  1.6999  1.3145
 0.7993  0.9306  1.6244
 1.1707  1.3203  0.9556
 0.6730  1.0785  0.6450
 0.2143  0.7599  1.1299
[torch.FloatTensor of size 5x3]

Addition: in-place


In [9]:
# adds x to y
y.add_(x)
print(y)


 0.6433  1.6999  1.3145
 0.7993  0.9306  1.6244
 1.1707  1.3203  0.9556
 0.6730  1.0785  0.6450
 0.2143  0.7599  1.1299
[torch.FloatTensor of size 5x3]

Note

Any operation that mutates a tensor in-place is post-fixed with an ``_``. For example, ``x.copy_(y)`` and ``x.t_()`` will change ``x``.
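For instance, both of the in-place calls mentioned above mutate ``x`` directly (a minimal sketch):

```python
import torch

x = torch.ones(2, 3)
y = torch.zeros(2, 3)

x.copy_(y)       # in-place copy: x now holds y's values (all zeros)
x.t_()           # in-place transpose: x is now 3x2
print(x.size())  # torch.Size([3, 2])
print(x.sum())   # sums to 0, since x was overwritten with zeros
```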

You can use standard numpy-like indexing with all bells and whistles!


In [10]:
print(x[:, 1])


 0.9306
 0.6504
 0.5058
 0.8398
 0.2373
[torch.FloatTensor of size 5]

Read later:

100+ Tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, etc., are described at http://pytorch.org/docs/torch
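As a small taste of those categories, here is a sketch using a transpose, a slice, and a matrix multiply:

```python
import torch

x = torch.rand(5, 3)

print(x.t().size())               # transpose: torch.Size([3, 5])
print(x[1:3].size())              # row slicing: torch.Size([2, 3])
print(torch.mm(x.t(), x).size())  # matrix multiply: torch.Size([3, 3])
```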

NumPy Bridge

Converting a torch Tensor to a NumPy array and vice versa is a breeze.

The torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other.

Converting a torch Tensor to a NumPy Array


In [11]:
a = torch.ones(5)
print(a)


 1
 1
 1
 1
 1
[torch.FloatTensor of size 5]


In [12]:
b = a.numpy()
print(b)


[ 1.  1.  1.  1.  1.]

See how the NumPy array changed in value:


In [13]:
a.add_(1)
print(a)
print(b)


 2
 2
 2
 2
 2
[torch.FloatTensor of size 5]

[ 2.  2.  2.  2.  2.]

Converting a NumPy Array to a torch Tensor

See how changing the NumPy array changed the torch Tensor automatically:


In [14]:
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)


[ 2.  2.  2.  2.  2.]

 2
 2
 2
 2
 2
[torch.DoubleTensor of size 5]

All Tensors on the CPU, except CharTensor, support converting to NumPy and back.

CUDA Tensors

Tensors can be moved onto the GPU using the ``.cuda`` method.


In [15]:
# let us run this cell only if CUDA is available
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
    x + y
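A GPU result can be moved back to the CPU with ``.cuda``'s counterpart ``.cpu``. Here is a guarded sketch that runs correctly whether or not CUDA is available:

```python
import torch

x = torch.rand(5, 3)
y = torch.rand(5, 3)

if torch.cuda.is_available():
    x = x.cuda()  # move operands to the GPU
    y = y.cuda()

z = x + y         # computed on whichever device the tensors live on

if z.is_cuda:
    z = z.cpu()   # bring the result back to the CPU

print(z.size())   # torch.Size([5, 3])
```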