In this lab session we will learn how to pre-process feature vectors using numpy. For this purpose, let's create 10 feature vectors, each with 5 features. We use numpy.random to generate these examples.
In [1]:
import numpy
X = numpy.random.randn(10, 5)
Let's print this matrix X, where each row is a feature vector.
In [2]:
print(X)
We can access the i-th row of X by X[i,:]. Likewise, the j-th column can be accessed by X[:,j].
In [3]:
print(X[1,:])
In [4]:
print(X[:,1])
Next, let's $\ell_1$ normalize each feature vector. For this purpose, we must compute the sum of the absolute values in each feature vector and divide each element of the vector by that norm. The $\ell_1$ norm is defined as follows:
$\ell_1 (\mathbf{x}) = \sum_i |x_i|$
Let us compute the $\ell_1$ norm of each feature vector in X. We can use the abs function, which gives the absolute value of a number. Conveniently, it also operates element-wise on arrays. The sum function gives the sum, obviously!
In [6]:
for i in range(0, 10):
    print(i, numpy.sum(numpy.abs(X[i,:])))
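The explicit loop is fine for 10 vectors, but numpy can also compute all the norms in one go. The following is a minimal sketch, assuming the X defined above; the name l1_norms is just for illustration.
In [ ]:
# Sum the absolute values along each row (axis=1) to get one l1 norm per feature vector.
l1_norms = numpy.sum(numpy.abs(X), axis=1)
print(l1_norms)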
Now let's compute $\ell_2$ norms instead, defined as $\ell_2 (\mathbf{x}) = \sqrt{\sum_i x_i^2}$. For this we need to square the values, add them up, and take the square root.
In [7]:
for i in range(0, 10):
    print(i, numpy.sqrt(numpy.sum(X[i,:] * X[i,:])))
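If your numpy version supports the axis argument of numpy.linalg.norm, the same per-row norms can be obtained directly; this is a sketch of that alternative, again using the X from above.
In [ ]:
# Per-row l2 norms; passing ord=1 would give the l1 norms instead.
print(numpy.linalg.norm(X, axis=1))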
If you wanted to $\ell_2$ normalize X, this can be done as follows.
In [8]:
for i in range(0,10):
    norm = numpy.sqrt(numpy.sum(X[i,:] * X[i,:]))
    X[i,:] = X[i,:] / norm
In [9]:
print(X)
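The row-by-row loop can also be replaced by a single broadcasted expression. Below is a minimal sketch on a fresh matrix, so that the X we just normalized is left untouched; the name X2 is only for illustration.
In [ ]:
X2 = numpy.random.randn(10, 5)
# l2 norm of each row, then divide each row by its own norm via broadcasting.
norms = numpy.sqrt(numpy.sum(X2 * X2, axis=1))
X2 = X2 / norms[:, numpy.newaxis]
print(numpy.sqrt(numpy.sum(X2 * X2, axis=1)))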
Just to make sure that X is indeed $\ell_2$ normalized, let's print the norms again.
In [10]:
for i in range(0,10):
    print(i, numpy.sqrt(numpy.sum(X[i,:] * X[i,:])))
OK! That looks fine. Now try to $\ell_1$ normalize X by yourself as well.
Let us assume that we further wish to scale each feature (dimension) to the [0,1] range using the (x - min) / (max - min) method (see the lecture notes for details). We need to find the min and max of each feature across all feature vectors, which amounts to computing the min and max of each column in X. Guess what, numpy has min and max functions that return the minimum and maximum values of an array. How convenient...
In [11]:
print(X[:,0])
print(numpy.min(X[:,0]))
print(numpy.max(X[:,0]))
Let's use these functions to perform the [0,1] scaling on X.
In [12]:
for j in range(0, 5):
    minVal = numpy.min(X[:,j])
    maxVal = numpy.max(X[:,j])
    for i in range(0, 10):
        X[i,j] = (X[i,j] - minVal) / (maxVal - minVal)
In [13]:
print(X)
OK! Everything is in [0,1] now. One thing to remember is that if min and max are equal, the scaling involves a division by zero. In that case all values of that feature are the same, so you can set the scaled value to either 0 or 1, as you wish, as long as you are consistent. Of course, if a feature has the same value across all training instances it is not a useful feature, because it does not discriminate between the different classes. So you can even remove that feature from your training data and be happy about it (one less feature to worry about).
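To make that concrete, here is one possible guard against a zero range, written as a vectorized sketch on an illustrative matrix Xraw (the names, and the choice of mapping a constant feature to 0, are just one of the options discussed above).
In [ ]:
Xraw = numpy.random.randn(10, 5)
mins = numpy.min(Xraw, axis=0)
maxs = numpy.max(Xraw, axis=0)
ranges = maxs - mins
# Where a feature is constant (range == 0), divide by 1 instead of 0;
# the numerator (x - min) is 0 there, so the whole column becomes 0.
safe_ranges = numpy.where(ranges == 0, 1.0, ranges)
Xscaled = (Xraw - mins) / safe_ranges
print(Xscaled)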
Let us assume that we wanted to do Gaussian scaling (see lecture notes) on this X. Here, we would use (x - mean) / sd, where sd is the standard deviation of the feature values. Not very surprisingly, numpy has numpy.mean and numpy.std functions that compute exactly these quantities. I guess at this point I have convinced you why you should use python+numpy for data mining and machine learning.
In [14]:
for j in range(0, 5):
    mean = numpy.mean(X[:,j])
    sd = numpy.std(X[:,j])
    for i in range(0, 10):
        X[i,j] = (X[i,j] - mean) / sd
In [15]:
print(X)
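As before, the two nested loops can be replaced by a single vectorized expression; a minimal sketch on an illustrative matrix X3 (not the X above).
In [ ]:
X3 = numpy.random.randn(10, 5)
# Subtract each column's mean and divide by each column's standard deviation.
X3 = (X3 - numpy.mean(X3, axis=0)) / numpy.std(X3, axis=0)
print(numpy.mean(X3, axis=0))
print(numpy.std(X3, axis=0))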
In [ ]: