SINGA Core Classes

Device

A device instance represents a hardware device with multiple execution units, e.g.,

  • A GPU, which has multiple CUDA streams
  • A CPU, which has multiple threads

All data structures (variables) are allocated on a device instance; consequently, all operations on them are executed on the resident device.

Create a device instance


In [1]:
from singa import device
default_dev = device.get_default_device()
gpu = device.create_cuda_gpu()  # the first gpu device
gpu


Out[1]:
<singa.singa_wrap.Device; proxy of <Swig Object of type 'std::shared_ptr< singa::Device > *' at 0x7f69a05ff330> >

NOTE: currently, a CUDA device creation function can be called only once per process, due to a restriction of the cnmem memory pool.


In [ ]:
gpu = device.create_cuda_gpu_on(1)  # use the gpu device with the specified GPU ID
gpu_list1 = device.create_cuda_gpus(2)  # the first two gpu devices
gpu_list2 = device.create_cuda_gpus([0, 2])  # create the gpu instances on the given GPU IDs
opencl_gpu = device.create_opencl_device()  # valid if SINGA is compiled with USE_OPENCL=ON

In [2]:
device.get_num_gpus()


Out[2]:
3

In [3]:
device.get_gpu_ids()


Out[3]:
(0, 1, 2)
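
When the GPU setup is unknown in advance, a script can fall back to the host device if no GPU is present. Below is a minimal sketch using only the functions shown above (the variable name dev is illustrative; remember the NOTE above that a CUDA device creation function should be called only once per process):


In [ ]:
if device.get_num_gpus() > 0:
    dev = device.create_cuda_gpu_on(device.get_gpu_ids()[0])
else:
    dev = device.get_default_device()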

Tensor

A tensor instance represents a multi-dimensional array allocated on a device instance. It provides linear algebra operations, such as +, -, *, /, dot, and pow.

NOTE: class member functions are in-place; global functions are out-of-place.
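
For example, a short sketch contrasting the two styles (using only calls demonstrated in the cells below):


In [ ]:
from singa import tensor
t = tensor.Tensor((3,))
t.uniform(-1, 1)    # member function: fills t in place
r = tensor.relu(t)  # global function: t is unchanged, r is a new tensor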

Create tensor instances


In [4]:
from singa import tensor
import numpy as np
a = tensor.Tensor((2, 3))
a.shape


Out[4]:
(2, 3)

In [5]:
a.device


Out[5]:
<singa.singa_wrap.Device; proxy of <Swig Object of type 'std::shared_ptr< singa::Device > *' at 0x7f69a02f8a50> >

In [6]:
gb = tensor.Tensor((2, 3), gpu)

In [7]:
gb.device


Out[7]:
<singa.singa_wrap.Device; proxy of <Swig Object of type 'std::shared_ptr< singa::Device > *' at 0x7f69a05ff330> >

Initialize tensor values


In [8]:
a.set_value(1.2)     # set every element of a to 1.2
gb.gaussian(0, 0.1)  # fill gb with samples from Gaussian(mean=0, std=0.1)

To and from numpy


In [9]:
tensor.to_numpy(a)


Out[9]:
array([[ 1.20000005,  1.20000005,  1.20000005],
       [ 1.20000005,  1.20000005,  1.20000005]], dtype=float32)

In [10]:
tensor.to_numpy(gb)


Out[10]:
array([[ 0.24042693, -0.21337385, -0.0969397 ],
       [-0.010797  , -0.07642138, -0.09220808]], dtype=float32)

In [11]:
c = tensor.from_numpy(np.array([1, 2], dtype=np.float32))
c.shape


Out[11]:
(2,)

In [12]:
c.copy_from_numpy(np.array([3, 4], dtype=np.float32))
tensor.to_numpy(c)


Out[12]:
array([ 3.,  4.], dtype=float32)

Move tensor between devices


In [13]:
gc = c.clone()
gc.to_device(gpu)
gc.device


Out[13]:
<singa.singa_wrap.Device; proxy of <Swig Object of type 'std::shared_ptr< singa::Device > *' at 0x7f69a05ff330> >

In [14]:
b = gb.clone()
b.to_host()  # the same as b.to_device(default_dev)
b.device


Out[14]:
<singa.singa_wrap.Device; proxy of <Swig Object of type 'std::shared_ptr< singa::Device > *' at 0x7f69a02f8a50> >
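
A common workflow is to create data on the host from numpy, move it to the GPU for computation, and bring results back for inspection. A minimal sketch combining the calls above:


In [ ]:
x = tensor.from_numpy(np.ones((2, 3), dtype=np.float32))
x.to_device(gpu)  # subsequent operations on x run on the GPU
x.to_host()       # move back to the default (host) device
tensor.to_numpy(x)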

Operations

NOTE: a tensor must be initialized before executing any operation that reads its values.

Summary


In [15]:
gb.l1()  # the L1 norm divided by the number of elements, i.e., the average absolute value


Out[15]:
0.12169448286294937

In [16]:
a.l2()  # the L2 norm divided by the number of elements


Out[16]:
0.4898979663848877

In [17]:
e = tensor.Tensor((2, 3))
e.is_empty()  # False: e has a non-empty shape, although its values are uninitialized


Out[17]:
False

In [18]:
gb.size()  # the number of elements


Out[18]:
6L

In [19]:
gb.memsize()  # the number of bytes occupied by the elements


Out[19]:
24L

In [20]:
# note: only matrix multiplication is supported for transposed tensors;
# other operations on a transposed tensor would result in errors
c.is_transpose()


Out[20]:
False

In [21]:
et = e.T()
et.is_transpose()


Out[21]:
True

In [22]:
et.shape


Out[22]:
(3L, 2L)

In [23]:
et.ndim()


Out[23]:
2L
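
The summary functions are consistent with one another: gb holds 2 * 3 = 6 elements, and each float32 element occupies 4 bytes, so memsize() returns 6 * 4 = 24. A quick sanity check:


In [ ]:
assert gb.memsize() == gb.size() * 4  # 4 bytes per float32 element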

Member functions (in-place)

These functions change the content of the tensor in place.


In [24]:
a += b
tensor.to_numpy(a)


Out[24]:
array([[ 1.44042695,  0.98662621,  1.10306036],
       [ 1.18920302,  1.12357867,  1.10779202]], dtype=float32)

In [25]:
a -= b
tensor.to_numpy(a)


Out[25]:
array([[ 1.20000005,  1.20000005,  1.20000005],
       [ 1.20000005,  1.20000005,  1.20000005]], dtype=float32)

In [26]:
a *= 2
tensor.to_numpy(a)


Out[26]:
array([[ 2.4000001,  2.4000001,  2.4000001],
       [ 2.4000001,  2.4000001,  2.4000001]], dtype=float32)

In [27]:
a /= 3
tensor.to_numpy(a)


Out[27]:
array([[ 0.80000007,  0.80000007,  0.80000007],
       [ 0.80000007,  0.80000007,  0.80000007]], dtype=float32)

In [28]:
d = tensor.Tensor((3,))
d.uniform(-1, 1)
tensor.to_numpy(d)


Out[28]:
array([ 0.62944734, -0.72904599,  0.81158388], dtype=float32)

In [29]:
a.add_row(d)
tensor.to_numpy(a)


Out[29]:
array([[ 1.42944741,  0.07095408,  1.61158395],
       [ 1.42944741,  0.07095408,  1.61158395]], dtype=float32)
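
Because these functions avoid allocating new tensors, they are convenient for parameter updates. Below is a sketch of a vanilla SGD step built only from the in-place operators above (w and grad are hypothetical parameter and gradient tensors):


In [ ]:
w = tensor.Tensor((3,))
w.set_value(1.0)
grad = tensor.Tensor((3,))
grad.gaussian(0, 0.1)
grad *= 0.01  # scale by the learning rate, in place
w -= grad     # w = w - lr * grad, without allocating a new tensor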

Global functions (out of place)

These functions do not change the memory of the input tensors; instead, they return a new tensor holding the result.

Unary functions


In [30]:
h = tensor.sign(d)
tensor.to_numpy(h)


Out[30]:
array([ 1., -1.,  1.], dtype=float32)

In [31]:
tensor.to_numpy(d)


Out[31]:
array([ 0.62944734, -0.72904599,  0.81158388], dtype=float32)

In [32]:
h = tensor.abs(d)
tensor.to_numpy(h)


Out[32]:
array([ 0.62944734,  0.72904599,  0.81158388], dtype=float32)

In [33]:
h = tensor.relu(d)
tensor.to_numpy(h)


Out[33]:
array([ 0.62944734,  0.        ,  0.81158388], dtype=float32)

In [34]:
g = tensor.sum(a, 0)
g.shape


Out[34]:
(3L,)

In [35]:
g = tensor.sum(a, 1)
g.shape


Out[35]:
(2L,)
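
Combining sum with the in-place scalar division shown earlier gives a mean. A sketch computing the per-column means of a:


In [ ]:
m = tensor.sum(a, 0)  # sum over axis 0; the result has shape (3,)
m /= a.shape[0]       # divide by the number of rows
tensor.to_numpy(m)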

In [36]:
tensor.bernoulli(0.5, g)  # fill g with samples drawn from Bernoulli(0.5)
tensor.to_numpy(g)


Out[36]:
array([ 1.,  0.], dtype=float32)

In [37]:
g.gaussian(0, 0.2)          # member function, fills g in place
tensor.gaussian(0, 0.2, g)  # global function that also fills g in place
tensor.to_numpy(g)


Out[37]:
array([-0.12226005, -0.05827543], dtype=float32)

Binary functions


In [38]:
f = a + b
tensor.to_numpy(f)


Out[38]:
array([[ 1.66987431, -0.14241977,  1.51464427],
       [ 1.41865039, -0.0054673 ,  1.51937592]], dtype=float32)

In [39]:
g = a < b  # element-wise comparison: 1. where a < b, 0. otherwise
tensor.to_numpy(g)


Out[39]:
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]], dtype=float32)

In [40]:
tensor.add_column(2, c, 1, f)  # f = 2 * c + 1 * f; c is added to every column
tensor.to_numpy(f)


Out[40]:
array([[ 7.66987419,  5.85758018,  7.51464415],
       [ 9.41865063,  7.99453259,  9.5193758 ]], dtype=float32)

BLAS

BLAS functions may change the memory of an input tensor.


In [41]:
tensor.axpy(2, a, f)  # f = 2 * a + f, updating f in place
tensor.to_numpy(b)


Out[41]:
array([[ 0.24042693, -0.21337385, -0.0969397 ],
       [-0.010797  , -0.07642138, -0.09220808]], dtype=float32)

In [42]:
f = tensor.mult(a, b.T())  # matrix multiplication: f = a * b.T()
tensor.to_numpy(f)


Out[42]:
array([[ 0.17231143, -0.16945721],
       [ 0.17231143, -0.16945721]], dtype=float32)

In [43]:
tensor.mult(a, b.T(), f, 2, 1)  # f = 2 * a * b.T() + 1 * f
tensor.to_numpy(f)


Out[43]:
array([[ 0.51693428, -0.50837165],
       [ 0.51693428, -0.50837165]], dtype=float32)
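
Putting mult and add_row together expresses the forward pass of a fully connected layer, y = x * W + b. Below is a sketch with hypothetical tensors x, W, and b of compatible shapes:


In [ ]:
x = tensor.Tensor((4, 3))  # a batch of 4 examples with 3 features each
x.gaussian(0, 1)
W = tensor.Tensor((3, 2))  # weight matrix mapping 3 inputs to 2 outputs
W.gaussian(0, 0.1)
b = tensor.Tensor((2,))    # one bias value per output
b.set_value(0.1)
y = tensor.mult(x, W)      # matrix multiplication, shape (4, 2)
y.add_row(b)               # add the bias to every row, in place
tensor.to_numpy(y)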