To define a tensor, we use the torch.Tensor constructor. For example, the following defines a vector with 4 elements:
In [1]:
a = torch.Tensor(4)
Torch does not initialize the tensor's memory, so the vector will contain garbage values. You'll see this when you print it.
In [2]:
print(a)
Out[2]:
In iTorch, you need not use print to display the value of a variable. You can just type the name of the variable:
In [3]:
a
Out[3]:
To declare a matrix, you would do:
In [4]:
b = torch.Tensor(2,2)
In [5]:
b
Out[5]:
For a 3-dimensional tensor, you would write:
In [6]:
c = torch.Tensor(2,2,2)
In [7]:
c
Out[7]:
And so on. To check a tensor's dimensions, use the size method:
In [8]:
a:size()
Out[8]:
In [9]:
b:size()
Out[9]:
In [10]:
c:size()
Out[10]:
In [11]:
c:normal()
Out[11]:
The normal() method fills the tensor in place with values drawn from a standard normal distribution. Since these values are random, you should see a different output every time.
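As a sketch, normal can also take an explicit mean and standard deviation (the two-argument form tensor:normal(mean, stdv)); verify against the Torch random documentation:

```lua
-- Fill c in place with samples from a normal distribution with
-- mean 0 and standard deviation 2, like the no-argument form above.
c:normal(0, 2)
```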
You can fill your tensor with any value like this:
In [12]:
c:fill(3.14)
Out[12]:
A note on : and .
a:function() is just syntactic sugar for a.function(a).
So, c:fill(2.14) can be written as c.fill(c, 2.14).
In [13]:
c.fill(c, 2.14)
Out[13]:
So this actually operates on the passed object, c in this case. If you look at the contents of c, you'll see that it has changed.
In [14]:
c
Out[14]:
Other ways
There are other ways to create tensors with randomly generated values. rand returns values in the range [0,1) drawn from a uniform distribution, whereas randn returns values from a normal distribution with mean 0 and variance 1.
In [115]:
torch.rand(2,2)
Out[115]:
In [116]:
torch.randn(2,2)
Out[116]:
In [16]:
b:normal()
Out[16]:
To peek at the element in the 1st row and 2nd column of b, we do this:
In [21]:
b[{1,2}]
Out[21]:
or this:
In [20]:
b[1][2]
Out[20]:
The [m][n][p] method is a less general form of slicing and can only point to scalars. If you want to slice out a smaller tensor from a bigger tensor, you should use the {} method.
Here are some examples of {} at work.
In [29]:
b[{{1}}]
Out[29]:
This selects the first row of b.
In [31]:
b[{{1,2}}]
Out[31]:
This selects the 1st and 2nd rows of b, which is just the full matrix b.
In [34]:
b[{1,{2}}]
Out[34]:
In [35]:
b[{{1},2}]
Out[35]:
In [36]:
b[{{1},{2}}]
Out[36]:
Notice the difference between b[{1,2}], b[{1,{2}}], b[{{1},2}], and b[{{1},{2}}]. Pay attention to the size printed below the output. If you wrap an index in an extra {} inside the outer {}, you usually get back a tensor along that dimension instead of dropping it. Always pay attention to the dimensions of your output; a lot of bugs happen because of this.
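To summarize, here is a sketch of the sizes you should expect for a 2x2 matrix b (the comments restate the behavior described above; check them against your own output):

```lua
b = torch.randn(2,2)
b[{1,2}]     -- a plain number: both indices are scalars
b[{1,{2}}]   -- a 1D tensor of size 1: the {} keeps the 2nd dimension
b[{{1},2}]   -- a 1D tensor of size 1: the {} keeps the 1st dimension
b[{{1},{2}}] -- a 2D tensor of size 1x1: both dimensions are kept
```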
More detailed examples of this can be found here. I would suggest trying these examples in another iTorch notebook.
Here is a very general example:
In [37]:
d = torch.randn(3,4,5)
In [43]:
d
Out[43]:
In [38]:
d[{1,{2,3},{3,5}}]
Out[38]:
Notice how this is different from d[{{1},{2,3},{3,5}}] (Look at the size line).
In [39]:
d[{{1},{2,3},{3,5}}]
Out[39]:
Both of them select the 1st element of the 1st dimension, the 2nd and 3rd elements of the 2nd dimension, and the 3rd to 5th elements of the 3rd dimension, but the second piece of code returns a 1x2x3 tensor whereas the first returns a 2x3 tensor.
Another way to slice is the select command, but it isn't as general as the {} method. It removes one dimension from a tensor, so it doesn't work on 1D tensors. Its documentation is here.
For example:
In [52]:
-- tensor_name:select(dimension, index)
d:select(1,2)
Out[52]:
This selects the 2nd element of the 1st dimension, which is a 4x5 matrix. Each element of the 1st dimension is a 4x5 matrix, as can be seen in the output for d above.
Try to understand what this might be doing:
In [53]:
d:select(2,2)
Out[53]:
There are many methods of extracting a smaller tensor from a bigger tensor and they are described here.
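Two such methods are narrow and sub. The sketch below assumes d is still the 3x4x5 tensor created earlier; both methods are described in the Torch tensor documentation:

```lua
-- narrow(dim, index, size): take `size` elements of dimension `dim`,
-- starting at `index`. Here this yields a 3x2x5 view of d.
d:narrow(2, 2, 2)

-- sub(first, last): restrict the 1st dimension to elements
-- `first`..`last`. Here this yields a 2x4x5 view of d.
d:sub(1, 2)
```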
Let's first create two tensors. The first contains just ones, and the second is filled with 2s.
In [98]:
e = torch.ones(2,2)
In [99]:
f = torch.Tensor(2,2):fill(2)
In [100]:
e
Out[100]:
In [101]:
f
Out[101]:
Addition
This is one simple way to add tensors:
In [102]:
e+f
Out[102]:
This doesn't affect the values of the addends themselves as can be seen below.
In [103]:
e
Out[103]:
In [104]:
f
Out[104]:
Another way to write this is:
In [105]:
torch.add(e,f)
Out[105]:
The following, however, does change the value of e.
In [106]:
e:add(f)
Out[106]:
In [107]:
e
Out[107]:
In [108]:
f
Out[108]:
Adding scalars
To add scalars to tensors, use the same functions. As before, e+1 and torch.add(e,1) return a new tensor and leave e unchanged, while e:add(1) modifies e in place.
In [109]:
e+1
Out[109]:
In [110]:
e
Out[110]:
In [111]:
torch.add(e,1)
Out[111]:
In [112]:
e
Out[112]:
In [113]:
e:add(1)
Out[113]:
In [114]:
e
Out[114]:
norm
This function returns the norm of a tensor. By default, this is the L2 norm.
In [118]:
g = torch.ones(2,2)
In [120]:
g
Out[120]:
In [121]:
-- sqrt(1^2 + 1^2 + 1^2 + 1^2) = 2
g:norm()
Out[121]:
In [122]:
g
Out[122]:
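norm also accepts an argument p for other Lp norms. As a sketch, for the all-ones 2x2 tensor g defined above:

```lua
g:norm(1) -- L1 norm: |1| + |1| + |1| + |1| = 4
g:norm()  -- L2 norm: sqrt(1^2 + 1^2 + 1^2 + 1^2) = 2
```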