Basic Operations in PyTorch

PyTorch has two design philosophies:

  1. A replacement for NumPy that harnesses the power of GPUs.
  2. A fast and flexible platform for deep learning research.

NumPy's rules and methods for arithmetic, functions, and indexing carry over seamlessly to PyTorch tensor operations. One point deserves special attention: in-place operations in PyTorch, marked by a trailing underscore, mutate the variable they are called on. See the examples below.

Basic functions

  1. In-place functions are always post-fixed with "_" and will mutate the object! A function without the trailing "_" returns a new tensor and leaves the object unchanged. See the example below.

  2. The full zoo of tensor manipulations can be found in the PyTorch documentation.

If a variable "x" is defined by assignment from another variable "y", the two names refer to the same tensor, so an in-place operation on "x" also changes "y". Because of this, be careful when combining assignment with in-place operations.
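Since plain assignment only creates an alias, an independent copy requires .clone(). A minimal sketch of the difference (the variable names are illustrative, and torch.equal is used to compare values):

```python
import torch

x = torch.rand(3, 4)

# zz = x binds another name to the same tensor, so an in-place
# operation through either name mutates both
zz = x
zz.add_(1)
print(torch.equal(zz, x))  # True: same storage, same values

# .clone() allocates an independent copy, so mutating it leaves x alone
y = x.clone()
y.add_(1)
print(torch.equal(y, x))   # False: y is now x + 1
```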


In [1]:
import torch

In [25]:
x = torch.rand(4, 3)  # a randomly generated tensor
print(x)
x.t_()  # in-place transpose: note that x itself has been changed!
print(x)


 0.8031  0.0763  0.4798
 0.6118  0.6341  0.4783
 0.0423  0.9399  0.1805
 0.0696  0.5616  0.8898
[torch.FloatTensor of size 4x3]


 0.8031  0.6118  0.0423  0.0696
 0.0763  0.6341  0.9399  0.5616
 0.4798  0.4783  0.1805  0.8898
[torch.FloatTensor of size 3x4]


In [26]:
print(x.t())
print(x)  # x keeps its previously modified value; x.t() returns a new tensor without mutating x


 0.8031  0.0763  0.4798
 0.6118  0.6341  0.4783
 0.0423  0.9399  0.1805
 0.0696  0.5616  0.8898
[torch.FloatTensor of size 4x3]


 0.8031  0.6118  0.0423  0.0696
 0.0763  0.6341  0.9399  0.5616
 0.4798  0.4783  0.1805  0.8898
[torch.FloatTensor of size 3x4]


In [27]:
z = torch.Tensor(3, 4)
z.copy_(x)  # copy x into z in place
print(x)
print(z)


 0.8031  0.6118  0.0423  0.0696
 0.0763  0.6341  0.9399  0.5616
 0.4798  0.4783  0.1805  0.8898
[torch.FloatTensor of size 3x4]


 0.8031  0.6118  0.0423  0.0696
 0.0763  0.6341  0.9399  0.5616
 0.4798  0.4783  0.1805  0.8898
[torch.FloatTensor of size 3x4]


In [29]:
print(z.add_(x))
print(x)
print(z)  # note that z is replaced by the sum, while x stays the same


 2.4094  1.8353  0.1268  0.2087
 0.2290  1.9023  2.8198  1.6848
 1.4393  1.4348  0.5415  2.6693
[torch.FloatTensor of size 3x4]


 0.8031  0.6118  0.0423  0.0696
 0.0763  0.6341  0.9399  0.5616
 0.4798  0.4783  0.1805  0.8898
[torch.FloatTensor of size 3x4]


 2.4094  1.8353  0.1268  0.2087
 0.2290  1.9023  2.8198  1.6848
 1.4393  1.4348  0.5415  2.6693
[torch.FloatTensor of size 3x4]


In [30]:
# the add() operation changes neither x nor zz
zz = x
print(zz.add(x))
print(x)
print(zz)


 1.6063  1.2235  0.0845  0.1391
 0.1527  1.2682  1.8798  1.1232
 0.9595  0.9565  0.3610  1.7796
[torch.FloatTensor of size 3x4]


 0.8031  0.6118  0.0423  0.0696
 0.0763  0.6341  0.9399  0.5616
 0.4798  0.4783  0.1805  0.8898
[torch.FloatTensor of size 3x4]


 0.8031  0.6118  0.0423  0.0696
 0.0763  0.6341  0.9399  0.5616
 0.4798  0.4783  0.1805  0.8898
[torch.FloatTensor of size 3x4]


In [31]:
# note that both x and zzz have been changed

zzz = x
print(zzz.add_(x))
print(x)
print(zzz)


 1.6063  1.2235  0.0845  0.1391
 0.1527  1.2682  1.8798  1.1232
 0.9595  0.9565  0.3610  1.7796
[torch.FloatTensor of size 3x4]


 1.6063  1.2235  0.0845  0.1391
 0.1527  1.2682  1.8798  1.1232
 0.9595  0.9565  0.3610  1.7796
[torch.FloatTensor of size 3x4]


 1.6063  1.2235  0.0845  0.1391
 0.1527  1.2682  1.8798  1.1232
 0.9595  0.9565  0.3610  1.7796
[torch.FloatTensor of size 3x4]

Interoperability with NumPy

As a replacement for NumPy, PyTorch connects to NumPy seamlessly. Tensors support all NumPy-style indexing, provide nearly all of NumPy's basic operations, and can be converted to and from NumPy ndarrays with ease. For example:
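For instance, the familiar NumPy slicing and boolean-mask idioms work on tensors directly. A small sketch (torch.arange and .reshape assume a reasonably recent PyTorch; the names are illustrative):

```python
import torch

t = torch.arange(12).reshape(3, 4)  # 0..11 laid out as a 3x4 tensor

print(t[1])         # second row
print(t[:, 2])      # third column
print(t[0:2, 1:3])  # 2x2 sub-block
print(t[t > 8])     # boolean-mask selection, exactly as in NumPy
```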


In [33]:
a = torch.ones(3)
print(a)


 1
 1
 1
[torch.FloatTensor of size 3]


In [39]:
b = a.numpy()
c = torch.from_numpy(b)
print(type(b), b)  # b is a numpy array converted from a
print(type(a), a)  # a is a torch tensor
print(type(c), c)  # c is a torch tensor converted from b


<class 'numpy.ndarray'> [ 1.  1.  1.]
<class 'torch.FloatTensor'> 
 1
 1
 1
[torch.FloatTensor of size 3]

<class 'torch.FloatTensor'> 
 1
 1
 1
[torch.FloatTensor of size 3]

Note that a and b in the example above share the same underlying memory. Converting a torch Tensor to a numpy array and vice versa is a breeze, but because they share memory locations, an in-place change to one also changes the other.


In [40]:
a.add_(2)
print(a, b)  # note that both a and b are changed!


 3
 3
 3
[torch.FloatTensor of size 3]
 [ 3.  3.  3.]

Torch tensors can be moved to the GPU using the .cuda() method:


In [22]:
if torch.cuda.is_available():
    a = a.cuda()
    b = b.cuda()
    c = a + b
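The cell above runs the addition on the GPU when one is available. A device-agnostic sketch of the full round trip (the variable names are illustrative), keeping in mind that a CUDA tensor must be brought back with .cpu() before calling .numpy():

```python
import torch

x = torch.rand(3, 4)

if torch.cuda.is_available():
    x = x.cuda()  # move the tensor to the GPU
    y = x + x     # the addition runs on the GPU
    y = y.cpu()   # move back to the CPU, e.g. before calling .numpy()
else:
    y = x + x     # the same code runs on the CPU

print(y.shape)  # torch.Size([3, 4]) either way
```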
