Class 2: Introduction to TensorFlow.
Neural networks were one of the first machine learning models. Their popularity has fallen twice and is now on its third rise. Deep learning implies the use of neural networks. The "deep" in deep learning refers to a neural network with many hidden layers. Because neural networks have been around for so long, they have quite a bit of baggage. Many different training algorithms, activation/transfer functions, and structures have been added over the years. This course is only concerned with the current state-of-the-art techniques for deep neural networks. I am not going to spend any time discussing the history of neural networks. If you would like to learn about some of the more classic structures of neural networks, there are several chapters dedicated to this in your course book. For the latest technology, I wrote an article for the Society of Actuaries on deep learning as the third generation of neural networks.
Neural networks accept input and produce output. The input to a neural network is called the feature vector. The size of this vector is always a fixed length. Changing the size of the feature vector means recreating the entire neural network. Though the feature vector is called a "vector," this is not always the case. A vector implies a 1D array. Historically the input to a neural network was always 1D. However, with modern neural networks you might see higher-dimensional inputs, such as 2D image matrices.
Prior to CNNs, the image input was sent to a neural network simply by squashing the image matrix into a long array, placing the image's rows side-by-side. CNNs are different, as the nD matrix passes directly through the neural network layers.
Initially this course will focus upon 1D input to neural networks. However, later sessions will focus more heavily upon higher-dimensional input.
Dimensions
The term dimension can be confusing in neural networks. In the sense of a 1D input vector, dimension refers to how many elements are in that 1D array. For example, a neural network with 10 input neurons has 10 dimensions. However, now that we have CNNs, the input itself has dimensions. The input to a neural network will usually have 1, 2 or 3 dimensions; 4 or more dimensions is unusual. You might have a 2D input to a neural network that covers 64x64 pixels. This would result in 4,096 input neurons. This network is either 2D or 4,096D, depending on which set of dimensions you are talking about!
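To make the distinction concrete, here is a short NumPy sketch (added for illustration, not part of the original notebook): a 64x64 grayscale image is a 2D input, but flattening its rows side-by-side produces the 4,096-element 1D feature vector that a classic, non-convolutional network expects.
import numpy as np
# A hypothetical 64x64 grayscale image (2D input)
image = np.random.rand(64, 64)
print(image.shape)           # (64, 64) -- two dimensions
# Flatten the rows side-by-side into a 1D feature vector
feature_vector = image.reshape(-1)
print(feature_vector.shape)  # (4096,) -- 4,096 input neurons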
Like many models, neural networks can function in either classification or regression.
The following shows a classification and regression neural network:
Notice that the output of the regression neural network is numeric and the output of the classification is a class. Regression, or two-class classification, networks always have a single output. Classification neural networks have an output neuron for each class.
The following diagram shows a typical neural network:
There are usually four types of neurons in a neural network:
These neurons are grouped into layers: an input layer, one or more hidden layers, and an output layer.
The output from a single neuron is calculated according to the following formula:
$ f(x,\theta) = \phi(\sum_i(\theta_i \cdot x_i)) $
The input vector (x) represents the feature vector and the vector $\theta$ represents the weights. To account for the bias neuron, a value of 1 is always appended to the end of the input feature vector. This causes the last weight to be interpreted as a bias value that is simply added to the summation. The $\phi$ is the transfer/activation function.
Consider using the above equation to calculate the output from the following neuron:
The above neuron has 2 inputs plus the bias as a third. This neuron might accept the following input feature vector:
[1,2]
To account for the bias neuron, a 1 is appended, as follows:
[1,2,1]
The weight vector must have one weight per input, plus an additional weight for the bias, giving 3 weights for this neuron (2 real inputs + bias). A weight vector might be:
[ 0.1, 0.2, 0.3]
To calculate the summation, perform the following:
0.1*1 + 0.2*2 + 0.3*1 = 0.8
The value of 0.8 is passed to the $\phi$ function, which represents the activation function.
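As a quick check, the same neuron calculation can be reproduced with NumPy (a small sketch added for illustration, using the toy inputs and weights above):
import numpy as np
x = np.array([1.0, 2.0, 1.0])        # feature vector with the bias value of 1 appended
theta = np.array([0.1, 0.2, 0.3])    # weights; the last weight acts as the bias
summation = np.dot(theta, x)         # 0.1*1 + 0.2*2 + 0.3*1
print(summation)                     # 0.8 -- this value is passed to the activation function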
Activation functions, also known as transfer functions, are used to calculate the output of each layer of a neural network. Historically, neural networks have used hyperbolic tangent, sigmoid/logistic, or linear activation functions. However, modern deep neural networks primarily make use of the following activation functions: the rectified linear unit (ReLU), softmax, and linear.
The ReLU function is calculated as follows:
$ \phi(x) = \max(0, x) $
The Softmax is calculated as follows:
$ \phi_i(z) = \frac{e^{z_i}}{\sum\limits_{j \in group}e^{z_j}} $
The Softmax activation function is only useful with more than one output neuron. It ensures that all of the output neurons sum to 1.0. This makes it very useful for classification where it shows the probability of each of the classes as being the correct choice.
The linear activation function is essentially no activation function:
$ \phi(x) = x $
For regression problems, this is the activation function of choice.
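The three modern activation functions above can be sketched in a few lines of NumPy (added for illustration; the function names are my own):
import numpy as np
def relu(x):
    # max(0, x), applied element-wise
    return np.maximum(0, x)
def softmax(z):
    # exponentiate, then normalize so the outputs sum to 1.0
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()
def linear(x):
    # essentially no activation function
    return x
print(relu(np.array([-1.0, 0.5, 2.0])))    # [0.  0.5 2. ]
print(softmax(np.array([1.0, 2.0, 3.0])))  # three probabilities that sum to 1.0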
Why is the ReLU activation function so popular? It was one of the key improvements to neural networks that make deep learning work. Prior to deep learning, the sigmoid activation function was very common:
$ \phi(x) = \frac{1}{1 + e^{-x}} $
The graph of the sigmoid function is shown here:
Neural networks are often trained using gradient descent. To make use of gradient descent, it is necessary to take the derivative of the activation function. This allows the partial derivative of the error function to be calculated with respect to each of the weights. A derivative is the instantaneous rate of change:
The derivative of the sigmoid function is given here:
$ \phi'(x)=\phi(x)(1-\phi(x)) $
This derivative is often given in other forms; the above form is used for computational efficiency.
The graph of the sigmoid derivative is given here:
The derivative quickly saturates to zero as x moves away from zero in either direction. This is not a problem for the derivative of the ReLU, which is given here:
$ \phi'(x) = \begin{cases} 1 & x > 0 \\ 0 & x \leq 0 \end{cases} $
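A quick numerical sketch (added for illustration) shows why this matters: the sigmoid derivative collapses toward zero for inputs far from zero, while the ReLU derivative stays at 1 for any positive input.
import numpy as np
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))
def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)
def relu_derivative(x):
    return np.where(x > 0, 1.0, 0.0)
x = np.array([0.0, 2.0, 5.0, 10.0])
print(sigmoid_derivative(x))  # roughly [0.25, 0.105, 0.0066, 0.000045] -- saturates toward zero
print(relu_derivative(x))     # [0. 1. 1. 1.] -- stays at 1 for positive inputs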
The activation functions seen in the previous section specify the output of a single neuron. Together, the weight and bias of a neuron shape the output of the activation function to produce the desired output. To see how this process occurs, consider the following equation. It represents a single-input, sigmoid-activation neural network.
$ f(x,w,b) = \frac{1}{1 + e^{-(wx+b)}} $
The x variable represents the single input to the neural network. The w and b variables specify the weight and bias of the neural network. The above equation is a combination of the weighted sum of the inputs and the sigmoid activation function. For this section, we will consider the sigmoid function because it clearly demonstrates the effect that a bias neuron has.
The weights of the neuron allow you to adjust the slope or shape of the activation function. The following figure shows the effect on the output of the sigmoid activation function if the weight is varied:
The above diagram shows several sigmoid curves using the following parameters:
f(x,0.5,0.0)
f(x,1.0,0.0)
f(x,1.5,0.0)
f(x,2.0,0.0)
To produce the curves, we did not use bias, which is evident in the third parameter of 0 in each case. Using four weight values yields four different sigmoid curves in the above figure. No matter the weight, we always get the same value of 0.5 when x is 0, because all of the curves pass through that point. We might need the neural network to produce values other than 0.5 when the input is 0.
Bias does shift the sigmoid curve, which allows values other than 0.5 when x is near 0. The following figure shows the effect of using a weight of 1.0 with several different biases:
The above diagram shows several sigmoid curves with the following parameters:
f(x,1.0,1.0)
f(x,1.0,0.5)
f(x,1.0,1.5)
f(x,1.0,2.0)
We used a weight of 1.0 for these curves in all cases. The different bias values shift the sigmoid curve to the left or right. Because all the curves merge together at the top right or bottom left, it is not a complete shift.
When we put bias and weights together, they produced a curve that created the necessary output from a neuron. The above curves are the output from only one neuron. In a complete network, the output from many different neurons will combine to produce complex output patterns.
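The weight and bias curves above can be reproduced with a few lines of NumPy and Matplotlib (a sketch added for illustration, using the same parameter values listed above):
import numpy as np
import matplotlib.pyplot as plt
def f(x, w, b):
    # single-input sigmoid neuron: weighted input plus bias, then sigmoid
    return 1.0 / (1.0 + np.exp(-(w * x + b)))
x = np.linspace(-5, 5, 200)
# Varying the weight changes the slope of the curve
for w in [0.5, 1.0, 1.5, 2.0]:
    plt.plot(x, f(x, w, 0.0), label="w={}, b=0.0".format(w))
# Varying the bias shifts the curve left or right
for b in [0.5, 1.0, 1.5, 2.0]:
    plt.plot(x, f(x, 1.0, b), linestyle="--", label="w=1.0, b={}".format(b))
plt.legend()
plt.show()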
In [1]:
import tensorflow as tf
print("Tensor Flow Version: {}".format(tf.__version__))
TensorFlow is not the only game in town. These are some of the best-supported alternatives. Most of these are written in C++. In order of my own preference (I have used most of these):
Torch is used by Google DeepMind, the Facebook AI Research Group, IBM, Yandex and the Idiap Research Institute. It has been used for some of the most advanced deep learning projects in the world. However, it requires the Lua programming language. It is very advanced, but it is not mainstream. I have not worked with Torch (yet!).
TensorFlow is a low-level mathematics API, similar to NumPy. However, unlike NumPy, TensorFlow is built for deep learning. TensorFlow works by allowing you to define compute graphs with Python. In this regard, it is similar to Spark. TensorFlow compiles these compute graphs into highly efficient C++/CUDA code.
The TensorBoard command line utility can be used to view these graphs. The iris neural network's graph used in this class is shown here:
Expanding the DNN gives:
Skflow is a layer on top of TensorFlow that makes it much easier to create neural networks. Rather than defining the graphs, as you see above, you define the individual layers of the network with a much higher-level API. Unless you are performing research into entirely new structures of deep neural networks, it is unlikely that you need to program TensorFlow directly.
For this class, we will use SKFLOW, rather than direct TensorFlow.
SKFLOW is built into TensorFlow, as of v0.8. This makes it very easy to use.
All examples in this class will use SKFLOW, and you are encouraged to use it for the programming assignments.
The following functions will be used in conjunction with TensorFlow to help preprocess the data. It is okay to simply use them as-is; however, for a better understanding, study how they work.
These functions allow you to build the feature vector for a neural network. Consider the following:
In [2]:
import numpy as np
import pandas as pd
from sklearn import preprocessing

# Encode text values to dummy variables (i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
    dummies = pd.get_dummies(df[name])
    for x in dummies.columns:
        dummy_name = "{}-{}".format(name, x)
        df[dummy_name] = dummies[x]
    df.drop(name, axis=1, inplace=True)

# Encode text values to indexes (i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
    le = preprocessing.LabelEncoder()
    df[name] = le.fit_transform(df[name])
    return le.classes_

# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
    if mean is None:
        mean = df[name].mean()
    if sd is None:
        sd = df[name].std()
    df[name] = (df[name] - mean) / sd

# Convert all missing values in the specified column to the median
def missing_median(df, name):
    med = df[name].median()
    df[name] = df[name].fillna(med)

# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
    result = []
    for x in df.columns:
        if x != target:
            result.append(x)
    # Integer targets are class labels (classification); other targets are numeric (regression)
    if np.issubdtype(df[target].dtype, np.integer):
        return df.as_matrix(result).astype(np.float32), df[target].astype(np.int32)
    return df.as_matrix(result).astype(np.float32), df[target].astype(np.float32)
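As a quick illustration of what these helpers do, consider the following toy dataframe (an example added here; the color/value data is made up):
demo = pd.DataFrame({'color': ['red', 'green', 'blue'], 'value': [1.0, 2.0, 3.0]})
encode_text_dummy(demo, 'color')   # replaces 'color' with color-red, color-green, color-blue columns
print(demo)
demo2 = pd.DataFrame({'color': ['red', 'green', 'blue']})
classes = encode_text_index(demo2, 'color')  # replaces text with integer indexes
print(demo2)
print(classes)                     # array of class names, indexed by those integers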
This is a very simple example of how to perform the Iris classification using TensorFlow. The iris.csv file is used, rather than using the built-in files that many of the Google examples require.
Make sure that you always run the previous code blocks. If you run the code block below without the code block above, you will get errors.
In [3]:
import tensorflow.contrib.learn as skflow
from sklearn import metrics
import pandas as pd
import os
from sklearn import preprocessing
path = "./data/"
# Read iris dataset
filename_read = os.path.join(path,"iris.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# Encode the feature columns (predictors) as z-scores
encode_numeric_zscore(df,'sepal_l')
encode_numeric_zscore(df,'sepal_w')
encode_numeric_zscore(df,'petal_l')
encode_numeric_zscore(df,'petal_w')
species = encode_text_index(df,'species')
# Create x(predictors) and y (expected outcome)
x,y = to_xy(df,'species')
# Create a deep neural network with 3 hidden layers of 10, 20, 10
classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=3,steps=200)
# Fit/train neural network
classifier.fit(x, y)
# Measure accuracy
score = metrics.accuracy_score(y, classifier.predict(x))
print("Final score: {}".format(score))
# How to make many predictions
pred = classifier.predict(x)
predDF = pd.DataFrame(pred)
pred_nameDF = pd.DataFrame(species[pred])
actual_nameDF = pd.DataFrame(species[df['species']])
df2 = pd.concat([df,predDF,pred_nameDF,actual_nameDF],axis=1)
df2.columns = ['sepal_l','sepal_w','petal_l','petal_w','expected','predicted','predicted_str','expected_str']
df2
Out[3]:
In [4]:
import numpy as np
# How to make predictions one at a time, in a loop
for fv in x:
    # Flip the feature vector to the right shape
    fv2 = fv.reshape((1,4))
    # Compute a prediction for the 3 classes, this will return 0, 1 or 2.
    pred = classifier.predict(fv2)
    # Turn the numeric prediction into a text string (e.g. Iris-virginica)
    pred_name = species[pred][0]
    # Output result
    print("{} : {} ({})".format(fv,pred,pred_name))
In [5]:
# ad hoc prediction
sample_flower = np.array([[5.0, 3.0, 4.0, 2.0]], dtype=np.float32)  # shape (1,4)
pred = classifier.predict(sample_flower)
print("Predict that {} is: {}".format(sample_flower,species[pred]))
This example shows how to encode the MPG dataset for regression. This is slightly more complex than Iris because the dataset contains missing values (horsepower), a categorical feature (origin) that must be dummy-encoded, a text column (name) that must be dropped, and a numeric target (mpg) rather than a class.
To encode categorical values that are part of the feature vector, use the functions from above. If the categorical value is the target (as was the case with Iris), use the same technique as Iris. The Iris technique allows you to decode the predictions back to Iris text strings.
In [6]:
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name', axis=1, inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Display training data
df
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Create a deep neural network with 3 hidden layers of 50, 25, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000)
# Fit/train neural network
regressor.fit(x, y)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(regressor.predict(x),y))
print("Final score (RMSE): {}".format(score))
# How to make many predictions
pred = regressor.predict(x)
predDF = pd.DataFrame(pred)
df2 = pd.concat([df,predDF,pd.DataFrame(y)],axis=1)
df2.columns = list(df.columns)+['pred','ideal']
df2
Out[6]:
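As with the Iris classifier, you can make a single ad hoc prediction with the trained regressor (a small sketch added for illustration; here the first encoded row of the training data is reused as the sample):
# Predict the MPG for a single (already encoded/z-scored) feature vector
sample_car = x[0].reshape((1, x.shape[1]))
pred_mpg = regressor.predict(sample_car)
print("Predicted MPG: {}".format(pred_mpg))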
The code below saves a TensorFlow network to a directory. This directory can be used to reload your weights without retraining. Because training can take a long time, this is critical.
The following code trains a network on the Iris dataset, reports the accuracy, saves and reloads the network, and then reports the same accuracy again.
In [7]:
import tensorflow.contrib.learn as skflow
from sklearn import metrics
import pandas as pd
import os
from sklearn import preprocessing
path = "./data/"
# Read iris dataset
filename_read = os.path.join(path,"iris.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# Encode the species column as integer class indexes
encode_text_index(df,'species')
# Create x (feature vectors) and y (expected classes) for training
x, y = to_xy(df,'species')
# Create a deep neural network with 3 hidden layers of 10, 20, 10
classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=3,steps=200)
# Fit/train neural network
classifier.fit(x, y)
# Measure accuracy
score = metrics.accuracy_score(y, classifier.predict(x))
print("Final score: {}".format(score))
# Save the neural network to a directory
classifier.save("./iris-network")
classifier = None # Kill it
# Reload it
classifier2 = skflow.TensorFlowEstimator.restore("./iris-network")
# Prove that the reloaded is the same as the original
score = metrics.accuracy_score(y, classifier2.predict(x))
print("Saved final score: {}".format(score))
TensorFlow includes a command-line utility called tensorboard that can be used to visualize neural networks. It is not needed for this course, but it can be handy to see your neural network, and I will use it in lecture a few times. It does not work with IBM Data Scientist Workbench, so you will need a native install if you would like to use it.
To make use of it, you must specify a logdir on the fit command, for example:
classifier.fit(x, y, logdir='./log/')
Once the fit occurs, the logdir will be filled with files that tensorboard will use. To view the graph, issue the following command:
tensorboard --logdir ./log