Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0.
In this notebook, we are going to use the tensor module from PySINGA to train a linear regression model. We use this example to illustrate the usage of PySINGA's tensor API. Please refer to the documentation page for more tensor functions provided by PySINGA.
In [1]:
from __future__ import division
from __future__ import print_function
from builtins import range
from past.utils import old_div
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
To import the tensor module of PySINGA, run
In [2]:
from singa import tensor
In [3]:
a, b = 3, 2
f = lambda x: a * x + b
gx = np.linspace(0., 1., 100)
gy = [f(x) for x in gx]
plt.plot(gx, gy, label='y=f(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='best')
Out[3]: (figure: the ground-truth line y = 3x + 2 over [0, 1])
In [4]:
nb_points = 30
# generate training data
train_x = np.asarray(np.random.uniform(0., 1., nb_points), np.float32)
train_y = np.asarray(f(train_x) + np.random.rand(nb_points), np.float32)
plt.plot(train_x, train_y, 'bo', ms=7)
Out[4]: (figure: scatter plot of the 30 noisy training points)
Assume we know that the training data points are sampled from a line, but we do not know the line's slope or intercept. Training then learns the slope (k) and intercept (b) by minimizing the error, i.e., ||kx+b-y||^2.
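The SGD updates in the training loop below follow from the gradients of this loss. Writing the per-point error as e_i = k*x_i + b - y_i, the mean squared error is L = (1/n) * sum_i e_i^2; its gradient w.r.t. k is (2/n) * sum_i e_i * x_i, and w.r.t. b it is (2/n) * sum_i e_i. The constant factor 2 is dropped in the code, which simply amounts to rescaling the learning rate alpha.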
In [5]:
def plot(idx, x, y):
    global gx, gy, axes
    # plot the ground-truth line
    axes[idx//5, idx%5].plot(gx, gy, label='y=f(x)')
    # plot the learned line
    axes[idx//5, idx%5].plot(x, y, label='y=kx+b')
    axes[idx//5, idx%5].legend(loc='best')

# set hyper-parameters
max_iter = 15
alpha = 0.05
# init parameters
k, b = 2., 0.
The SINGA tensor module supports basic linear algebra operations, like +, -, *, and /, and advanced functions including axpy, gemm, gemv, and random functions (e.g., Gaussian and Uniform).
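For instance, the following sketch exercises a few of these operations (the shapes and distribution parameters are illustrative, assuming the gaussian() and uniform() initializers of the PySINGA Tensor class):
t1 = tensor.Tensor((2, 3))
t1.gaussian(0.0, 1.0)    # fill t1 with samples from a Gaussian with mean 0, std 1
t2 = tensor.Tensor((2, 3))
t2.uniform(-1.0, 1.0)    # fill t2 with samples from a Uniform over [-1, 1]
t3 = t1 + t2             # element-wise addition
t4 = t1 * t2             # element-wise multiplication
t5 = t4 / 2.0            # element-wise division by a scalar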
SINGA Tensor instances can be created via tensor.Tensor() by specifying the shape, and optionally the device and data type. Note that every Tensor instance should be initialized (e.g., via set_value() or the random functions) before reading data from it. You can also create Tensor instances from numpy arrays via tensor.from_numpy().
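A small sketch of both ways (the shape and values here are illustrative):
t = tensor.Tensor((2, 3))    # shape only; default device and data type
t.set_value(0.0)             # initialize before reading data from it
a = tensor.from_numpy(np.array([1.0, 2.0, 3.0], dtype=np.float32))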
Users cannot read a single cell of a Tensor instance directly. To read a single cell, users need to convert the Tensor into a numpy array.
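Continuing the sketch above:
npa = tensor.to_numpy(a)     # copy the Tensor into a numpy array
print(npa[0])                # single cells are read from the numpy copy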
In [6]:
# to plot the intermediate results
fig, axes = plt.subplots(3, 5, figsize=(12, 8))
x = tensor.from_numpy(train_x)
y = tensor.from_numpy(train_y)
# sgd
for idx in range(max_iter):
    y_ = x * k + b
    err = y_ - y
    loss = old_div(tensor.sum(err * err), nb_points)
    print('loss at iter %d = %f' % (idx, loss))
    da1 = old_div(tensor.sum(err * x), nb_points)
    db1 = old_div(tensor.sum(err), nb_points)
    # update the parameters
    k -= da1 * alpha
    b -= db1 * alpha
    plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_))
We can see that the learned line gets closer to the ground-truth line (in blue) as training proceeds.
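Since k and b end up as plain Python floats, we can print the learned parameters directly. Note that the uniform noise added to train_y has mean 0.5, so the learned intercept should land near 2.5 rather than the ground-truth 2:
print('learned k = %.3f, b = %.3f' % (k, b))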