This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com) for PyCon 2014. Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_pycon2014/).

Part 1: Some Background

In this section we'll go through some preliminary topics and helpful background for the content in this tutorial.

By the end of this section you should:

  • Know what sort of tasks qualify as Machine Learning problems.
  • See some simple examples of machine learning.
  • Know the basics of creating and manipulating numpy arrays.
  • Know the basics of scatter plots in matplotlib.

What is Machine Learning?

In this section we will begin to explore the basic principles of machine learning. Machine Learning is about building programs with tunable parameters (typically an array of floating-point values) that are adjusted automatically to improve their behavior by adapting to previously seen data.

Machine Learning can be considered a subfield of Artificial Intelligence, since these algorithms can be seen as building blocks for making computers behave more intelligently by generalizing from data, rather than just storing and retrieving data items like a database system would.

We'll take a look at two very simple machine learning tasks here. The first is a classification task: the figure shows a collection of two-dimensional data, colored according to two different class labels. A classification algorithm may be used to draw a dividing boundary between the two clusters of points:


In [1]:
# Start matplotlib inline mode, so figures will appear in the notebook
%matplotlib inline

In [2]:
# Import the example plot from the figures directory
from fig_code import plot_sgd_separator
plot_sgd_separator()


This may seem like a trivial task, but it is a simple version of a very important concept. By drawing this separating line, we have learned a model which can generalize to new data: if you were to drop another point onto the plane which is unlabeled, this algorithm could now predict whether it's a blue or a red point.
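As a rough sketch of what such a classifier looks like in code, here is a minimal example using scikit-learn's SGDClassifier on two made-up clusters (not the actual data behind the figure above):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Two made-up clusters of 2D points, standing in for the two classes
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(-2, 0.5, size=(20, 2)),
               rng.normal(2, 0.5, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

# Fit a linear separator with stochastic gradient descent
clf = SGDClassifier(loss="hinge", random_state=0)
clf.fit(X, y)

# Drop a new, unlabeled point onto the plane:
# the learned boundary now predicts which class it belongs to
print(clf.predict([[2.0, 1.5]]))
```

The key point is the last line: once fit, the model can label points it has never seen.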

If you'd like to see the source code used to generate this, you can either open the code in the figures directory, or you can load the code using the %load magic command:


In [3]:
#Uncomment the %load command to load the contents of the file
# %load fig_code/sgd_separator.py

The next simple task we'll look at is a regression task: a simple best-fit line to a set of data:


In [4]:
from fig_code import plot_linear_regression
plot_linear_regression()


Again, this is an example of fitting a model to data, such that the model can make generalizations about new data. The model has been learned from the training data, and can be used to predict the result of test data: here, we might be given an x-value, and the model would allow us to predict the y value. Again, this might seem like a trivial problem, but it is a basic example of a type of operation that is fundamental to machine learning tasks.
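A minimal sketch of the same idea with plain numpy (the training data here is made up; the figure above uses its own):

```python
import numpy as np

# Made-up training data: noisy samples around the line y = 2x + 1
rng = np.random.RandomState(0)
x = 10 * rng.rand(20)
y = 2 * x + 1 + rng.normal(scale=0.5, size=20)

# Fit a degree-1 polynomial (a straight line) to the training data
slope, intercept = np.polyfit(x, y, 1)

# Use the learned model to predict the y value for a new x value
x_new = 5.0
print(slope * x_new + intercept)
```

The fitted slope and intercept are the model's "tunable parameters": given a new x-value, they predict the corresponding y-value.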

Numpy

Manipulating numpy arrays is an important part of doing machine learning (or, really, any type of scientific computation) in Python. This will likely be review for most of you: we'll quickly go through some of the most important features.


In [5]:
import numpy as np

# Generating a random array
X = np.random.random((3, 5))  # a 3 x 5 array

print(X)


[[ 0.68554084  0.99481945  0.42171778  0.2980198   0.788365  ]
 [ 0.91414655  0.26728696  0.59262316  0.05796566  0.73325553]
 [ 0.31650997  0.26602885  0.46244526  0.62143656  0.23170365]]

In [6]:
# Accessing elements

# get a single element
print(X[0, 0])

# get a row
print(X[1])

# get a column
print(X[:, 1])


0.685540838679
[ 0.91414655  0.26728696  0.59262316  0.05796566  0.73325553]
[ 0.99481945  0.26728696  0.26602885]

In [7]:
# Transposing an array
print(X.T)


[[ 0.68554084  0.91414655  0.31650997]
 [ 0.99481945  0.26728696  0.26602885]
 [ 0.42171778  0.59262316  0.46244526]
 [ 0.2980198   0.05796566  0.62143656]
 [ 0.788365    0.73325553  0.23170365]]

In [8]:
# Turning a 1D array into a column vector
y = np.linspace(0, 12, 5)
print(y)

# make into a column vector
print(y[:, np.newaxis])


[  0.   3.   6.   9.  12.]
[[  0.]
 [  3.]
 [  6.]
 [  9.]
 [ 12.]]

There is much, much more to know, but these few operations are fundamental to what we'll do during this tutorial.
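One more idiom worth knowing, since it appears throughout numpy-based code: broadcasting, where arrays of compatible shapes combine element-wise. A quick sketch building on the `np.newaxis` trick above:

```python
import numpy as np

x = np.arange(3)                 # shape (3,): a 1D array
y = np.arange(3)[:, np.newaxis]  # shape (3, 1): a column vector

# Broadcasting stretches the (3, 1) column and the (3,) row
# against each other, producing a (3, 3) grid of pairwise sums
print(x + y)
```

This is how a column vector and a row of values can be combined into a 2D grid without writing any loops.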

Scipy Sparse Matrices

We won't make very much use of these in this tutorial, but sparse matrices are very nice in some situations. For example, in some machine learning tasks, especially those associated with textual analysis, the data may be mostly zeros. Storing all these zeros is very inefficient. We can create and manipulate sparse matrices as follows:


In [9]:
# Create a random array with a lot of zeros
X = np.random.random((10, 5))
print(X)


[[ 0.94531988  0.24380585  0.28827154  0.81048153  0.75609912]
 [ 0.41189305  0.91508148  0.11583652  0.78611657  0.28910284]
 [ 0.57837054  0.88415398  0.69852079  0.03052359  0.69401941]
 [ 0.57988757  0.84273914  0.83461578  0.28921739  0.64017248]
 [ 0.76197598  0.50129526  0.94714542  0.45829698  0.37543524]
 [ 0.95940386  0.67520976  0.52125365  0.93214266  0.17329138]
 [ 0.32179395  0.55854669  0.66216461  0.70072992  0.60990773]
 [ 0.83447821  0.08467047  0.65538539  0.77651973  0.02608172]
 [ 0.53401371  0.18526802  0.53457061  0.58878805  0.79437564]
 [ 0.58140691  0.35953142  0.36719677  0.16835714  0.12890584]]

In [10]:
X[X < 0.7] = 0
print(X)


[[ 0.94531988  0.          0.          0.81048153  0.75609912]
 [ 0.          0.91508148  0.          0.78611657  0.        ]
 [ 0.          0.88415398  0.          0.          0.        ]
 [ 0.          0.84273914  0.83461578  0.          0.        ]
 [ 0.76197598  0.          0.94714542  0.          0.        ]
 [ 0.95940386  0.          0.          0.93214266  0.        ]
 [ 0.          0.          0.          0.70072992  0.        ]
 [ 0.83447821  0.          0.          0.77651973  0.        ]
 [ 0.          0.          0.          0.          0.79437564]
 [ 0.          0.          0.          0.          0.        ]]

In [11]:
from scipy import sparse

# turn X into a CSR (Compressed Sparse Row) matrix
X_csr = sparse.csr_matrix(X)
print(X_csr)


  (0, 0)	0.945319879873
  (0, 3)	0.810481526417
  (0, 4)	0.756099123544
  (1, 1)	0.915081477351
  (1, 3)	0.786116574635
  (2, 1)	0.884153979327
  (3, 1)	0.842739141962
  (3, 2)	0.834615776369
  (4, 0)	0.761975977392
  (4, 2)	0.947145416297
  (5, 0)	0.959403863631
  (5, 3)	0.932142657865
  (6, 3)	0.700729919386
  (7, 0)	0.834478214188
  (7, 3)	0.776519731559
  (8, 4)	0.79437564315

In [12]:
# convert the sparse matrix to a dense array
print(X_csr.toarray())


[[ 0.94531988  0.          0.          0.81048153  0.75609912]
 [ 0.          0.91508148  0.          0.78611657  0.        ]
 [ 0.          0.88415398  0.          0.          0.        ]
 [ 0.          0.84273914  0.83461578  0.          0.        ]
 [ 0.76197598  0.          0.94714542  0.          0.        ]
 [ 0.95940386  0.          0.          0.93214266  0.        ]
 [ 0.          0.          0.          0.70072992  0.        ]
 [ 0.83447821  0.          0.          0.77651973  0.        ]
 [ 0.          0.          0.          0.          0.79437564]
 [ 0.          0.          0.          0.          0.        ]]

In [13]:
# Sparse matrices support linear algebra:
y = np.random.random(X_csr.shape[1])
z1 = X_csr.dot(y)
z2 = X.dot(y)
np.allclose(z1, z2)


Out[13]:
True
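To see the storage savings concretely, here is a rough sketch comparing the bytes held by a dense array against its CSR version (the exact numbers will vary with the random draw and the index dtype):

```python
import numpy as np
from scipy import sparse

# A large, mostly-zero array: keep only about 1% of the entries
rng = np.random.RandomState(0)
X = rng.rand(1000, 1000)
X[X < 0.99] = 0

X_csr = sparse.csr_matrix(X)

# Dense storage holds every entry; CSR stores only the nonzero values
# plus their column indices and row pointers
dense_bytes = X.nbytes
sparse_bytes = X_csr.data.nbytes + X_csr.indices.nbytes + X_csr.indptr.nbytes
print(dense_bytes, sparse_bytes)
```

At about 1% density the CSR version needs only a small fraction of the dense array's memory, which is exactly the situation that arises in textual analysis.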

The CSR representation can be very efficient for computations, but it is not as good for adding elements. For that, the LIL (List of Lists) representation is better:


In [14]:
# Create an empty LIL matrix and add some items
X_lil = sparse.lil_matrix((5, 5))

for i, j in np.random.randint(0, 5, (15, 2)):
    X_lil[i, j] = i + j

print(X_lil)
print(X_lil.toarray())


  (0, 4)	4.0
  (1, 0)	1.0
  (1, 2)	3.0
  (1, 4)	5.0
  (2, 0)	2.0
  (2, 3)	5.0
  (3, 0)	3.0
  (3, 1)	4.0
  (3, 2)	5.0
  (3, 4)	7.0
  (4, 0)	4.0
  (4, 1)	5.0
  (4, 4)	8.0
[[ 0.  0.  0.  0.  4.]
 [ 1.  0.  3.  0.  5.]
 [ 2.  0.  0.  5.  0.]
 [ 3.  4.  5.  0.  7.]
 [ 4.  5.  0.  0.  8.]]

Often, once an LIL matrix is created, it is useful to convert it to CSR format (many scikit-learn algorithms require CSR or CSC format).


In [15]:
X_csr = X_lil.tocsr()
print(X_csr)


  (0, 4)	4.0
  (1, 0)	1.0
  (1, 2)	3.0
  (1, 4)	5.0
  (2, 0)	2.0
  (2, 3)	5.0
  (3, 0)	3.0
  (3, 1)	4.0
  (3, 2)	5.0
  (3, 4)	7.0
  (4, 0)	4.0
  (4, 1)	5.0
  (4, 4)	8.0

There are several other sparse formats that can be useful for various problems:

  • CSC (compressed sparse column)
  • BSR (block sparse row)
  • COO (coordinate)
  • DIA (diagonal)
  • DOK (dictionary of keys)

The scipy.sparse submodule also has a lot of functions for sparse matrices including linear algebra, sparse solvers, graph algorithms, and much more.
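Of these, COO is often the handiest when your data already exists as (row, column, value) triplets; a small sketch:

```python
import numpy as np
from scipy import sparse

# Build a matrix from (row, col, value) triplets -- the natural COO input
row = np.array([0, 1, 3])
col = np.array([2, 0, 3])
data = np.array([5.0, 1.0, 2.0])

X_coo = sparse.coo_matrix((data, (row, col)), shape=(4, 4))

# Like LIL, COO is typically converted to CSR/CSC before computation
X_csr = X_coo.tocsr()
print(X_csr.toarray())
```

This triplet-based construction avoids ever building the dense array in the first place.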

Matplotlib

Another important part of machine learning is visualization of data. The most common tool for this in Python is matplotlib. It is an extremely flexible package, but we will go over some basics here.

First, something specific to the IPython notebook: we can turn on "inline" mode, which makes plots appear directly in the notebook.


In [16]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

In [17]:
# plotting a line

x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x));



In [18]:
# scatter-plot points

x = np.random.normal(size=500)
y = np.random.normal(size=500)
plt.scatter(x, y);



In [19]:
# build a 2D array to show as an image
x = np.linspace(1, 12, 100)
y = x[:, np.newaxis]

im = y * np.sin(x) * np.cos(y)
print(im.shape)


(100, 100)

In [20]:
# imshow - note that origin is at the top-left!
plt.imshow(im);



In [21]:
# Contour plot - note that origin here is at the bottom-left!
plt.contour(im);
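One way to reconcile the two conventions is to pass origin='lower' to imshow, which puts row 0 at the bottom so that the image and its contours line up. A sketch (the Agg backend line is only needed when running this outside the notebook, where %matplotlib inline isn't available):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for script use; not needed in the notebook
import matplotlib.pyplot as plt
import numpy as np

# Rebuild the same 2D array as above
x = np.linspace(1, 12, 100)
y = x[:, np.newaxis]
im = y * np.sin(x) * np.cos(y)

# origin='lower' puts row 0 at the bottom-left, matching plt.contour
plt.imshow(im, origin='lower')
plt.contour(im, colors='k')  # overlay the contours on the same axes
plt.savefig("im_with_contours.png")
```

With the default origin='upper', the image would appear vertically flipped relative to the contour lines.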