nngraph

nngraph makes it easier to express complex neural network architectures than using only containers such as nn.Sequential, nn.Parallel, etc.


In [1]:
require 'nngraph';

The first type of new module you'll encounter is the Identity module. This module simply takes whatever it receives as input and passes it on to the next layer.


In [2]:
a = torch.Tensor{1,2,3}

Important note: the code below doesn't use nngraph yet. However, the Identity module is important for nngraph.


In [3]:
module1 = nn.Identity()

In [4]:
module1:forward(a)


Out[4]:
 1
 2
 3
[torch.DoubleTensor of size 3]

Here is how this would be written in nngraph:


In [5]:
-- Notice the extra (). Calling the module a second time turns it into a graph node;
-- the arguments of that second call are the node's parents (none here, since x1 is an input node).
x1 = nn.Identity()()
m = nn.gModule({x1},{x1})

In [6]:
m:forward(a)


Out[6]:
 1
 2
 3
[torch.DoubleTensor of size 3]

gModule is the master module: it specifies the input and output nodes of the graph. In the module above, the input node and the output node are both x1.
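
Since a gModule behaves like any other nn module, you can also run backward through it. A minimal sketch (for this identity graph the gradient simply passes through unchanged):

-- backward takes the original input and the gradient w.r.t. the output
m:backward(a, torch.Tensor{0.1, 0.2, 0.3})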

Let's try something more complex than this: x = a + b.


In [7]:
-- Declare some tensors
t1 = torch.Tensor{1,2,3}
t2 = torch.Tensor{3,4,5}

Without nngraph.


In [8]:
-- nn.CAddTable adds a table of tensors element-wise; their dimensions need to match.
a = nn.CAddTable()

In [9]:
a:forward({t1,t2})


Out[9]:
 4
 6
 8
[torch.DoubleTensor of size 3]

With nngraph.


In [10]:
a = nn.Identity()()
b = nn.Identity()()
x = nn.CAddTable()({a,b})
m = nn.gModule({a,b},{x})

In [11]:
m:forward({t1,t2})


Out[11]:
 4
 6
 8
[torch.DoubleTensor of size 3]

CSubTable, CMulTable

Element-wise subtraction and multiplication


In [12]:
a = nn.Identity()()
b = nn.Identity()()
x = nn.CSubTable()({a,b})
m = nn.gModule({a,b},{x})

In [13]:
m:forward({t1,t2})


Out[13]:
-2
-2
-2
[torch.DoubleTensor of size 3]


In [14]:
a = nn.Identity()()
b = nn.Identity()()
x = nn.CMulTable()({a,b})
m = nn.gModule({a,b},{x})

In [15]:
m:forward({t1,t2})


Out[15]:
  3
  8
 15
[torch.DoubleTensor of size 3]

SelectTable(index)

Selects one element out of a table by its index.


In [16]:
k = {5, torch.Tensor{1,2,3}}

In [17]:
k


Out[17]:
{
  1 : 5
  2 : DoubleTensor - size: 3
}

In [18]:
a = nn.Identity()()
x = nn.SelectTable(1)(a)
m = nn.gModule({a},{x})

In [19]:
m:forward(k)


Out[19]:
5	

Negative index

A negative index counts from the end of the table, so -1 selects the last element.


In [22]:
a = nn.Identity()()
x = nn.SelectTable(-1)(a)
m = nn.gModule({a},{x})

In [23]:
m:forward(k)


Out[23]:
 1
 2
 3
[torch.DoubleTensor of size 3]

Narrow(dim, offset, size)

Narrows dimension dim down to size elements, starting at index offset.


In [25]:
nn.Narrow(1,2,3):forward(torch.Tensor(5,2):fill(1))


Out[25]:
 1  1
 1  1
 1  1
[torch.DoubleTensor of size 3x2]


In [24]:
torch.Tensor(5,2):fill(1)


Out[24]:
 1  1
 1  1
 1  1
 1  1
 1  1
[torch.DoubleTensor of size 5x2]
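
The same Narrow can of course sit inside a graph. A minimal sketch using the tensor above:

a = nn.Identity()()
x = nn.Narrow(1,2,3)(a)
m = nn.gModule({a},{x})
m:forward(torch.Tensor(5,2):fill(1))  -- the 3x2 slice shown above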

LookupTable(vocab_size, word_vec_size)

Stores a learnable vector of size word_vec_size for each index from 1 to vocab_size; forwarding an index returns the corresponding row of the weight matrix.


In [26]:
m = nn.LookupTable(4, 5)

In [27]:
m:forward(torch.Tensor{4})


Out[27]:
 0.0633 -0.4810 -0.0053  0.3826 -1.7573
[torch.DoubleTensor of size 1x5]


In [28]:
m.weight


Out[28]:
 0.0432 -0.0334 -1.0834 -1.0515  0.4125
 1.4188  0.5871  1.0850  0.6336 -0.2546
-0.6490  0.9148 -0.1313 -0.9320 -1.0232
 0.0633 -0.4810 -0.0053  0.3826 -1.7573
[torch.DoubleTensor of size 4x5]
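
Forwarding several indices at once returns one row per index. A minimal sketch (the exact values depend on the random initialization of m.weight):

m:forward(torch.Tensor{1, 4})  -- a 2x5 tensor: rows 1 and 4 of m.weight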

ConcatTable()

Modules added to a ConcatTable are each applied to the same input, and their outputs are collected into a table.


In [29]:
m = nn.ConcatTable()

In [30]:
m:add(nn.Linear(5,2));

In [31]:
m:add(nn.Linear(5,3));

In [24]:
m:forward(torch.randn(5))


Out[24]:
{
  1 : DoubleTensor - size: 2
  2 : DoubleTensor - size: 3
}
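
In nngraph you don't need ConcatTable itself: feed the same node into several modules and list all of their outputs in gModule. A minimal sketch (the 5->2 and 5->3 Linear sizes mirror the example above):

x = nn.Identity()()
y1 = nn.Linear(5,2)(x)
y2 = nn.Linear(5,3)(x)
m = nn.gModule({x},{y1,y2})
m:forward(torch.randn(5))  -- a table holding a size-2 and a size-3 tensor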

SplitTable(dim)

Splits a tensor into a table of tensors along the given dimension.


In [26]:
m = nn.SplitTable(1)

In [27]:
m:forward(torch.rand(3,2))


Out[27]:
{
  1 : DoubleTensor - size: 2
  2 : DoubleTensor - size: 2
  3 : DoubleTensor - size: 2
}
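
As a graph, SplitTable combines naturally with SelectTable. A minimal sketch that splits a 3x2 tensor into rows and returns the first one:

a = nn.Identity()()
t = nn.SplitTable(1)(a)
x = nn.SelectTable(1)(t)
m = nn.gModule({a},{x})
m:forward(torch.rand(3,2))  -- the first row, a tensor of size 2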

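Putting a few of these pieces together, here is a minimal sketch of a small gated unit, z = tanh(W1*x) element-wise multiplied by sigmoid(W2*x); the 10->5 layer sizes are arbitrary choices for illustration:

x = nn.Identity()()
h1 = nn.Tanh()(nn.Linear(10,5)(x))
h2 = nn.Sigmoid()(nn.Linear(10,5)(x))
z = nn.CMulTable()({h1, h2})
m = nn.gModule({x},{z})
m:forward(torch.randn(10))  -- a tensor of size 5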