It’s impossible to get anything done if we can’t manipulate data.
Generally, there are two important things we need to do with it:
(i) acquire it!
(ii) process it once it’s inside the computer.
There’s no point in trying to acquire data if we don’t even know how to store it, so let’s get our hands dirty first by playing with synthetic data.

We’ll start by introducing NDArrays, MXNet’s primary tool for storing and transforming data.
If you’ve worked with NumPy before, you’ll notice that NDArrays are, by design, similar to NumPy’s multi-dimensional array.
However, they confer a few key advantages.
First, NDArrays support asynchronous computation on CPU, GPU, and distributed cloud architectures.
Second, they provide support for automatic differentiation.
These properties make NDArray an ideal library for machine learning, both for researchers and engineers launching production systems.

In this chapter, we’ll get you going with the basic functionality.
Don’t worry if you don’t understand any of the basic math, like element-wise operations or normal distributions.
In the next two chapters we’ll take another pass at NDArray, teaching you both the math you’ll need and how to realize it in code.
To get started, let’s import mxnet.
We’ll also import ndarray from mxnet for convenience.
We’ll make a habit of setting a random seed so that you always get the same results that we do.


In [141]:
import mxnet as mx
from mxnet import nd
mx.random.seed(1)

Next, let's see how to create an NDArray without any values initialized.
Specifically, we'll create a 2D array (a matrix) with 3 rows and 4 columns.


In [142]:
x = nd.empty((3, 4))
print(x)


[[ 0.  0.  0.  0.]
 [ 0.  0.  0.  0.]
 [ 0.  0.  0.  0.]]
<NDArray 3x4 @cpu(0)>

The empty method just grabs some memory and hands us back a matrix without setting the values of any of its entries.
This means that the entries can take arbitrary values, including very large ones!
But typically, we’ll want our matrices initialized.
Commonly, we want a matrix of all zeros.


In [143]:
x = nd.zeros((3, 5))
x


Out[143]:
[[ 0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.]]
<NDArray 3x5 @cpu(0)>

Similarly, ndarray has a function to create a matrix of all ones.


In [144]:
x = nd.ones((3, 4))
x


Out[144]:
[[ 1.  1.  1.  1.]
 [ 1.  1.  1.  1.]
 [ 1.  1.  1.  1.]]
<NDArray 3x4 @cpu(0)>

Often, we’ll want to create arrays whose values are sampled randomly.
This is especially common when we intend to use the array as a parameter in a neural network.
In this snippet, we initialize with values drawn from a standard normal distribution with zero mean and unit variance.


In [145]:
y = nd.random_normal(0, 1, shape=(3,4))
y


Out[145]:
[[ 0.03629481 -0.49024421 -0.95017916  0.03751944]
 [-0.72984636 -2.04010558  1.482131    1.04082799]
 [-0.45256865  0.31160426 -0.83673781 -0.78830057]]
<NDArray 3x4 @cpu(0)>

Just like in NumPy, the dimensions of each NDArray are accessible with the .shape attribute:


In [146]:
y.shape


Out[146]:
(3, 4)

We can also query its size, which is equal to the product of the components of the shape.
Together with the precision of the stored values, this tells us how much memory the array occupies.


In [147]:
y.size


Out[147]:
12
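
For example (our own aside, not part of the original notebook), NDArrays store 32-bit floats by default, so a quick back-of-the-envelope estimate of y's footprint looks like this:

import numpy as np
# 12 elements * 4 bytes per float32 element = 48 bytes (assumes .dtype reports numpy.float32)
np.dtype(y.dtype).itemsize * y.size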

NDArray supports a large number of standard mathematical operations.


In [148]:
# Element-wise addition:
x + y


Out[148]:
[[ 1.03629482  0.50975579  0.04982084  1.03751945]
 [ 0.27015364 -1.04010558  2.482131    2.04082799]
 [ 0.54743135  1.31160426  0.16326219  0.21169943]]
<NDArray 3x4 @cpu(0)>

In [149]:
# Multiplication:
x * y


Out[149]:
[[ 0.03629481 -0.49024421 -0.95017916  0.03751944]
 [-0.72984636 -2.04010558  1.482131    1.04082799]
 [-0.45256865  0.31160426 -0.83673781 -0.78830057]]
<NDArray 3x4 @cpu(0)>

In [150]:
# Exponentiation:
nd.exp(y)


Out[150]:
[[ 1.03696156  0.61247683  0.38667175  1.03823221]
 [ 0.48198304  0.13001499  4.40231705  2.83156061]
 [ 0.63599241  1.36561418  0.43312114  0.45461673]]
<NDArray 3x4 @cpu(0)>

Here we can use a matrix's transpose to compute a proper matrix-matrix product:


In [151]:
nd.dot(x, y.T)


Out[151]:
[[-1.3666091  -0.24699283 -1.76600277]
 [-1.3666091  -0.24699283 -1.76600277]
 [-1.3666091  -0.24699283 -1.76600277]]
<NDArray 3x3 @cpu(0)>

We’ll explain these operations and present even more operators in the linear algebra chapter.
But for now, we’ll stick with the mechanics of working with NDArrays.

In the previous example, every time we ran an operation, we allocated new memory to host its results.
For example, if we write y = x + y, we stop referencing the matrix that y previously pointed to and instead point y at the newly allocated result.
In the following example we demonstrate this with Python’s id() function, which gives us the exact address of the referenced object in memory.
After running y = y + x, we’ll find that id(y) points to a different location.
That’s because Python first evaluates y + x, allocating new memory for the result and then subsequently redirects y to point at this new location in memory.
Now read that last part again if you have any questions.


In [152]:
print('Memory location of y: ', id(y))
y = y + x
print('Memory location of y: ', id(y))


Memory location of y:  4631756528
Memory location of y:  4631726904

This might be undesirable for two reasons.
First, we don’t want to run around allocating memory unnecessarily all the time.
In machine learning, we might have hundreds of megabytes of parameters and update all of them multiple times per second.
Typically, we’ll want to perform these updates in place.
Second, we might point at the same parameters from multiple variables.
If we don’t update in place, this could cause a memory leak, and could cause us to inadvertently reference stale parameters.
Fortunately, performing in-place operations in MXNet is easy.
We can assign the result of an operation to a previously allocated array with slice notation, e.g., y[:] = <expression>.


In [153]:
print('Memory location of y: ', id(y))
y[:] = y + x
print('Memory location of y: ', id(y))


Memory location of y:  4631726904
Memory location of y:  4631726904

While this is syntactically nice, y + x here will still allocate a temporary buffer to store the result before copying it to y[:].
To make even better use of memory, we can directly invoke the underlying ndarray operation, in this case elemwise_add, avoiding temporary buffers.
We do this by specifying the out keyword argument, which every ndarray operator supports:


In [154]:
nd.elemwise_add(x, y, out=y)


Out[154]:
[[ 3.03629494  2.50975585  2.0498209   3.03751945]
 [ 2.27015352  0.95989442  4.482131    4.04082775]
 [ 2.54743147  3.31160426  2.16326213  2.21169949]]
<NDArray 3x4 @cpu(0)>

If we’re not planning to re-use x, then we can assign the result to x itself.
There are two ways to do this in MXNet.

  1. By using slice notation x[:] = x op y.
  2. By using the op-equals operators like +=.

In [155]:
print('Memory location of x: ', id(x))
x += y
print('Memory location of x: ', id(x))


Memory location of x:  4631755688
Memory location of x:  4631755688
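
For completeness, here is a quick sketch of the slice-notation variant, using throwaway arrays a and b (our own names) so we don't disturb the x used below:

a = nd.ones((2, 2))
b = nd.ones((2, 2))
before = id(a)
a[:] = a + b                  # writes the result into a's existing memory
print(id(a) == before)        # True: a still points to the same buffer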

MXNet NDArrays support slicing in all the ridiculous ways you might imagine accessing your data.
Here’s an example of reading the second and third rows from x:


In [156]:
x[1:3]


Out[156]:
[[ 3.27015352  1.95989442  5.482131    5.04082775]
 [ 3.54743147  4.3116045   3.16326213  3.21169949]]
<NDArray 2x4 @cpu(0)>

Now let's write to a specific element:


In [157]:
x[1, 2] = 9.0
x


Out[157]:
[[ 4.03629494  3.50975585  3.0498209   4.03751945]
 [ 3.27015352  1.95989442  9.          5.04082775]
 [ 3.54743147  4.3116045   3.16326213  3.21169949]]
<NDArray 3x4 @cpu(0)>

Multi-dimensional slicing is also supported:


In [158]:
x[1:2, 1:3]


Out[158]:
[[ 1.95989442  9.        ]]
<NDArray 1x2 @cpu(0)>

In [159]:
x[1:2, 1:3] = 5.0
x


Out[159]:
[[ 4.03629494  3.50975585  3.0498209   4.03751945]
 [ 3.27015352  5.          5.          5.04082775]
 [ 3.54743147  4.3116045   3.16326213  3.21169949]]
<NDArray 3x4 @cpu(0)>

You might wonder, what happens if you add a vector y to a matrix x?
These operations, where we combine a low-dimensional array y with a higher-dimensional array x, invoke a functionality called broadcasting.
Here, the low-dimensional array is duplicated along any axis with dimension 1 to match the shape of the high-dimensional array.


In [160]:
x = nd.ones(shape=(3, 3))
x


Out[160]:
[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]
<NDArray 3x3 @cpu(0)>

In [161]:
y = nd.arange(3)
y


Out[161]:
[ 0.  1.  2.]
<NDArray 3 @cpu(0)>

In [162]:
x + y


Out[162]:
[[ 1.  2.  3.]
 [ 1.  2.  3.]
 [ 1.  2.  3.]]
<NDArray 3x3 @cpu(0)>

While y is initially of shape (3,), MXNet infers its shape to be (1, 3), and then broadcasts along the rows to form a (3, 3) matrix.
You might wonder, why did MXNet choose to interpret y as a (1, 3) matrix and not a (3, 1)?
That’s because broadcasting prefers to duplicate along the left-most axis.
We can alter this behavior by explicitly giving y a 2D shape.


In [163]:
y = y.reshape((3, 1))
y


Out[163]:
[[ 0.]
 [ 1.]
 [ 2.]]
<NDArray 3x1 @cpu(0)>

In [164]:
x + y


Out[164]:
[[ 1.  1.  1.]
 [ 2.  2.  2.]
 [ 3.  3.  3.]]
<NDArray 3x3 @cpu(0)>

Converting MXNet NDArrays to and from NumPy is easy.
The converted arrays do not share memory.


In [165]:
print(x)
type(x)


[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]
<NDArray 3x3 @cpu(0)>
Out[165]:
mxnet.ndarray.ndarray.NDArray

In [166]:
a = x.asnumpy()
print(a)
type(a)


[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]
Out[166]:
numpy.ndarray

In [167]:
y = nd.array(a)
print(y)
type(y)


[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]
<NDArray 3x3 @cpu(0)>
Out[167]:
mxnet.ndarray.ndarray.NDArray
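
To see that the converted arrays really don't share memory, here is a small check of our own: mutating the NumPy copy leaves the original NDArray untouched.

a[0, 0] = 100.0    # modify the NumPy copy
print(a[0, 0])     # 100.0
print(x)           # x still contains all ones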

You might have noticed that MXNet NDArray looks almost identical to NumPy.
But there are a few crucial differences.
One of the key features that differentiates MXNet from NumPy is its support for diverse hardware devices.
In MXNet, every array has a context.
One context could be the CPU.
Other contexts might be various GPUs.
Things can get even hairier when we deploy jobs across multiple servers.
By assigning arrays to contexts intelligently, we can minimize the time spent transferring data between devices.
For example, when training neural networks on a server with a GPU, we typically prefer that the model’s parameters live on the GPU.
To start, let’s try initializing an array on the first GPU.

# The GPU configuration has to be built from source. Not right now, thanks.
z = nd.ones(shape=(3, 3), ctx=mx.gpu(0))
z
>>>
[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]

Given an NDArray on a given context, we can copy it to another context by using the copyto() method.

x_gpu = x.copyto(mx.gpu(0))
print(x_gpu)
>>>
[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]

The result of an operator will have the same context as the inputs.

x_gpu + z
>>>
[[ 2.  2.  2.]
 [ 2.  2.  2.]
 [ 2.  2.  2.]]

If we ever want to check the context of an NDArray programmatically, we can just inspect its .context attribute.

print(x_gpu.context)
print(z.context)
>>>
gpu(0)
gpu(0)

In order to perform an operation on two ndarrays x1 and x2, we need them both to live on the same context.
And if they don’t already, we may need to explicitly copy data from one context to another.
You might think that’s annoying.
After all, we just demonstrated that MXNet knows where each NDArray lives.
So why can’t MXNet just automatically copy x1 to x2.context and then add them?

In short, people use MXNet to do machine learning because they expect it to be fast.
But transferring variables between different contexts is slow.
So we want you to be 100% certain that you want to do something slow before we let you do it.
If MXNet just did the copy automatically without crashing then you might not realize that you had written some slow code.
We don’t want you to spend your entire life on StackOverflow, so we make some mistakes impossible.
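
For example (a sketch only, since this notebook isn't running a GPU build), the explicit fix is to move one operand over yourself before operating:

x1 = nd.ones((3, 3))                   # lives on cpu(0)
x2 = nd.ones((3, 3), ctx=mx.gpu(0))    # lives on gpu(0)
x1.copyto(x2.context) + x2             # both operands now on gpu(0)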

Watch Out!

Imagine that your variable z already lives on your first GPU (gpu(0)).
What happens if we call z.copyto(mx.gpu(0))?
It will make a copy and allocate new memory, even though that variable already lives on the desired device!
Depending on the environment our code is running in, two variables may already live on the same device, so we only want to make a copy if they currently live on different contexts.
In these cases, we can call as_in_context().
If the variable already lives on the specified context, this is a no-op.

print('id(z):', id(z))
z = z.copyto(mx.gpu(0))
print('id(z):', id(z))
z = z.as_in_context(mx.gpu(0))
print('id(z):', id(z))
print(z)
>>>
id(z): 140291459785224
id(z): 140291460485072
id(z): 140291460485072
[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]
