Adapted from the manifold t-SNE demo.
Install the manifold package (and the mnist package used below) with:
luarocks install manifold
luarocks install mnist
In [1]:
m = require 'manifold';
In [2]:
N = 2000
In [3]:
mnist = require 'mnist';
In [4]:
testset = mnist.testdataset()
In [5]:
testset
Out[5]:
Extract the first 2000 images from the test set for visualization.
In [6]:
testset.size = N
testset.data = testset.data[{{1,N}}]
testset.label = testset.label[{{1,N}}]
In [7]:
testset
Out[7]:
Flatten each 28×28 image into a 784-dimensional row vector.
In [8]:
x = torch.DoubleTensor(testset.data:size()):copy(testset.data)
x:resize(x:size(1), x:size(2) * x:size(3))
labels = testset.label
In [9]:
x:size()
Out[9]:
ndims - dimensionality of the t-SNE embedding (2 gives a 2-D map)
perplexity - controls the effective number of neighbors considered per point (typically between 5 and 50)
use_bh - whether to use the Barnes-Hut approximation
pca - t-SNE may be too slow for very high-dimensional input, so the data are first reduced to this many dimensions with PCA before t-SNE is run
theta - speed/accuracy trade-off parameter of the Barnes-Hut approximation (0 is exact), described in the Barnes-Hut t-SNE paper
In [10]:
opts = {ndims = 2, perplexity = 30, pca = 50, use_bh = true, theta=0.5}
mapped_x1 = m.embedding.tsne(x, opts)
Out[10]:
In [11]:
mapped_x1:size()
Out[11]:
In [12]:
im_size = 4096
map_im = m.draw_image_map(mapped_x1, x:resize(x:size(1), 1, 28, 28), im_size, 0, true)
Right-click the image and open it in a new tab to view the digits at full resolution.
In [13]:
itorch.image(map_im)
Out[13]:
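If you are not running inside an iTorch notebook, `itorch.image` is unavailable; a minimal alternative is to write the rendered map to disk with Torch's `image` rock (the filename `mnist_tsne_map.png` is just an illustrative choice):

```lua
-- Save the rendered t-SNE map to a PNG instead of displaying it inline.
-- Assumes the 'image' rock is installed (luarocks install image).
local image = require 'image'
image.save('mnist_tsne_map.png', map_im)
```

The saved PNG can then be opened in any image viewer and zoomed to inspect individual digits.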