Manipulating NumPy arrays is an important part of doing machine learning (or, really, any type of scientific computation) in Python. This will likely be a quick review for most; in any case, let's go through some of the most important features.
In [ ]:
import numpy as np
# Setting a random seed for reproducibility
rnd = np.random.RandomState(seed=123)
# Generating a random array
X = rnd.uniform(low=0.0, high=1.0, size=(3, 5)) # a 3 x 5 array
print(X)
(Note that NumPy arrays use 0-indexing just like other data structures in Python.)
In [ ]:
# Accessing elements
# get a single element
# (here: the element in the first row, first column)
print(X[0, 0])
# get a row
# (here: 2nd row)
print(X[1])
# get a column
# (here: 2nd column)
print(X[:, 1])
In [ ]:
# Transposing an array
print(X.T)
In [ ]:
# Creating a row vector
# of evenly spaced numbers over a specified interval.
y = np.linspace(0, 12, 5)
print(y)
In [ ]:
# Turning the row vector into a column vector
print(y[:, np.newaxis])
In [ ]:
# Getting the shape or reshaping an array
# Generating a random array
rnd = np.random.RandomState(seed=123)
X = rnd.uniform(low=0.0, high=1.0, size=(3, 5)) # a 3 x 5 array
print(X.shape)
print(X.reshape(5, 3))
In [ ]:
# Indexing by an array of integers (fancy indexing)
indices = np.array([3, 1, 0])
print(indices)
X[:, indices]
There is much, much more to know, but these few operations are fundamental to what we'll do during this tutorial.
We won't make much use of these in this tutorial, but sparse matrices are very useful in some situations. In some machine learning tasks, especially those associated with textual analysis, the data may be mostly zeros. Storing all these zeros is very inefficient, and representing the data in a way that only stores the non-zero values can be much more efficient. We can create and manipulate sparse matrices as follows:
In [ ]:
from scipy import sparse
# Create a random array with a lot of zeros
rnd = np.random.RandomState(seed=123)
X = rnd.uniform(low=0.0, high=1.0, size=(10, 5))
print(X)
In [ ]:
# set the majority of elements to zero
X[X < 0.7] = 0
print(X)
In [ ]:
# turn X into a CSR (Compressed-Sparse-Row) matrix
X_csr = sparse.csr_matrix(X)
print(X_csr)
In [ ]:
# Converting the sparse matrix to a dense array
print(X_csr.toarray())
(You may have stumbled upon an alternative method for converting sparse to dense representations: the todense method. The difference is that toarray returns a NumPy array, whereas todense returns a NumPy matrix. In this tutorial, we will be working with NumPy arrays, not matrices; the latter are not supported by scikit-learn.)
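As a quick sketch using the X_csr defined above, you can see the difference in the returned types:
In [ ]:
# toarray() returns a numpy.ndarray; todense() returns a numpy.matrix
print(type(X_csr.toarray()))
print(type(X_csr.todense()))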
The CSR representation can be very efficient for computations, but it is not as good for adding elements. For that, the LIL (list of lists) representation is better:
In [ ]:
# Create an empty LIL matrix and add some items
X_lil = sparse.lil_matrix((5, 5))
for i, j in np.random.randint(0, 5, (15, 2)):
    X_lil[i, j] = i + j
print(X_lil)
print(type(X_lil))
In [ ]:
X_dense = X_lil.toarray()
print(X_dense)
print(type(X_dense))
Often, once an LIL matrix is created, it is useful to convert it to CSR format (many scikit-learn algorithms require CSR or CSC format).
In [ ]:
X_csr = X_lil.tocsr()
print(X_csr)
print(type(X_csr))
The available sparse formats that can be useful for various problems are:
- CSR (compressed sparse row)
- CSC (compressed sparse column)
- BSR (block sparse row)
- COO (coordinate)
- DIA (diagonal)
- DOK (dictionary of keys)
- LIL (list of lists)
The scipy.sparse submodule also has a lot of functionality for sparse matrices, including linear algebra, sparse solvers, graph algorithms, and much more.
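As a quick sketch (the small matrix below is made up purely for illustration), you can build a matrix in COO format from coordinate triplets, convert it to CSR, and solve a sparse linear system with scipy.sparse.linalg:
In [ ]:
from scipy.sparse.linalg import spsolve

# Build a sparse matrix in COO format from (row, col, value) triplets
rows = np.array([0, 1, 2, 2])
cols = np.array([0, 1, 0, 2])
vals = np.array([4.0, 5.0, 1.0, 3.0])
A_coo = sparse.coo_matrix((vals, (rows, cols)), shape=(3, 3))

# Convert to CSR for efficient arithmetic and solving
A_csr = A_coo.tocsr()

# Solve the sparse linear system A x = b
b = np.array([1.0, 2.0, 3.0])
print(spsolve(A_csr, b))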
Another important part of machine learning is the visualization of data. The most common tool for this in Python is matplotlib. It is an extremely flexible package, and we will go over some basics here.
Since we are using Jupyter notebooks, let us use one of IPython's convenient built-in "magic functions", the "matplotlib inline" mode, which will draw the plots directly inside the notebook.
In [ ]:
%matplotlib inline
In [ ]:
import matplotlib.pyplot as plt
In [ ]:
# Plotting a line
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x));
In [ ]:
# Scatter-plot points
x = np.random.normal(size=500)
y = np.random.normal(size=500)
plt.scatter(x, y);
In [ ]:
# Showing images using imshow
# - note that origin is at the top-left by default!
x = np.linspace(1, 12, 100)
y = x[:, np.newaxis]
im = y * np.sin(x) * np.cos(y)
print(im.shape)
plt.imshow(im);
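As a quick aside, if you prefer the origin at the bottom-left, imshow accepts an origin keyword argument:
In [ ]:
# Place the origin at the bottom-left instead of the top-left
plt.imshow(im, origin='lower');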
In [ ]:
# Contour plots
# - note that origin here is at the bottom-left by default!
plt.contour(im);
In [ ]:
# 3D plotting
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
xgrid, ygrid = np.meshgrid(x, y.ravel())
ax.plot_surface(xgrid, ygrid, im, cmap=plt.cm.viridis, cstride=2, rstride=2, linewidth=0);
There are many, many more plot types available. One useful way to explore these is by looking at the matplotlib gallery.
You can test these examples out easily in the notebook: simply copy the Source Code link on each page, and put it in a notebook using the %load magic.
For example:
In [ ]:
# %load http://matplotlib.org/mpl_examples/pylab_examples/ellipse_collection.py