The goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will:
- extract the same word-count features used in the Module 3 assignment
- write functions to compute the probability predictions and the derivative of the log likelihood with respect to a single coefficient
- implement stochastic gradient ascent with an adjustable batch size
- compare stochastic gradient ascent against batch gradient ascent and explore the effect of step size
In [49]:
import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt
%matplotlib inline
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
In [2]:
products = pd.read_csv('amazon_baby_subset.csv')
In [3]:
with open('important_words.json') as important_words_file:
important_words = json.load(important_words_file)
print important_words[:3]
Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform two simple data transformations: (1) remove punctuation and (2) compute word counts (only for the important_words).
Refer to Module 3 assignment for more details.
In [4]:
products = products.fillna({'review':''}) # fill in N/A's in the review column
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
products.head(3)
Out[4]:
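Aside: this notebook targets Python 2 (note the print statements and text.translate(None, ...)). If you happen to be running Python 3, str.translate no longer accepts None; below is a minimal sketch of an equivalent cleaning function, assuming a Python 3 environment (not part of the original assignment).

import string

def remove_punctuation_py3(text):
    # In Python 3, str.translate takes a translation table; mapping every
    # punctuation character to None deletes it from the string.
    return text.translate(str.maketrans('', '', string.punctuation))

# products['review_clean'] = products['review'].apply(remove_punctuation_py3)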
In [5]:
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
The DataFrame products now contains one column for each of the 193 important_words.
In [6]:
products['perfect'][:3]
Out[6]:
In [9]:
products.head(2)
Out[9]:
In [10]:
with open('module-10-assignment-train-idx.json') as train_data_file:
train_data_idx = json.load(train_data_file)
with open('module-10-assignment-validation-idx.json') as validation_data_file:
validation_data_idx = json.load(validation_data_file)
print train_data_idx[:3]
print validation_data_idx[:3]
In [11]:
train_data = products.iloc[train_data_idx]
train_data.head(2)
Out[11]:
In [12]:
validation_data = products.iloc[validation_data_idx]
validation_data.head(2)
Out[12]:
In [13]:
print 'Training set : %d data points' % len(train_data)
print 'Validation set: %d data points' % len(validation_data)
Just like in the earlier assignments, we provide you with a function that extracts columns from a DataFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to account for the intercept term.
In [14]:
def get_numpy_data(dataframe, features, label):
dataframe['constant'] = 1
features = ['constant'] + features
features_frame = dataframe[features]
feature_matrix = features_frame.as_matrix()
label_sarray = dataframe[label]
label_array = label_sarray.as_matrix()
return(feature_matrix, label_array)
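Note: DataFrame.as_matrix() has been removed in recent pandas releases, so the cell above may fail depending on your environment. Below is a minimal sketch of the same conversion using .to_numpy() (available since pandas 0.24); this is an aside about newer environments, not part of the original assignment.

def get_numpy_data_v2(dataframe, features, label):
    # Same logic as get_numpy_data, but using .to_numpy() instead of
    # the removed DataFrame.as_matrix() method.
    dataframe['constant'] = 1
    features = ['constant'] + features
    feature_matrix = dataframe[features].to_numpy()
    label_array = dataframe[label].to_numpy()
    return (feature_matrix, label_array)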
In [15]:
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
In [16]:
print feature_matrix_train.shape
print feature_matrix_valid.shape
Note that we convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data
function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-10-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
Quiz Question: In Module 3 assignment, there were 194 features (an intercept + one feature for each of the 193 important words). In this assignment, we will use stochastic gradient ascent to train the classifier using logistic regression. How does changing the solver to stochastic gradient ascent affect the number of features?
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in Module 3 assignment to make probability predictions, since this part is not affected by using stochastic gradient ascent as a solver. Only the way in which the coefficients are learned is affected by using stochastic gradient ascent as a solver.
In [17]:
'''
feature_matrix: N * D(intercept term included)
coefficients: D * 1
predictions: N * 1
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
# YOUR CODE HERE
score = np.dot(feature_matrix, coefficients) # N * 1
# Compute P(y_i = +1 | x_i, w) using the link function
# YOUR CODE HERE
predictions = 1.0/(1+np.exp(-score))
# return predictions
return predictions
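As a quick sanity check (an aside, not part of the assignment), note that with all-zero coefficients every score is 0, so the link function should return exactly 0.5 for every data point:

example_features = np.array([[1., 2., -1.],
                             [1., 0., 1.]])   # 2 data points, intercept included
example_coefficients = np.zeros(3)
print(predict_probability(example_features, example_coefficients))
# should print an array of two 0.5 values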
Let us now work on making minor changes to how the derivative computation is performed for logistic regression.
Recall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows:
$$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$

In Module 3 assignment, we wrote a function to compute the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts the following two parameters:
- errors: vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
- feature: vector containing $h_j(\mathbf{x}_i)$ for all $i$

Complete the following code block:
In [45]:
"""
errors: N * 1
feature: N * 1
derivative: 1
"""
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(np.transpose(errors), feature)
# Return the derivative
return derivative
Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.
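As another small aside (not part of the assignment), the dot product computed by feature_derivative should agree with an explicit sum over data points:

example_errors = np.array([0.5, -0.3, 0.1])
example_feature = np.array([2., 0., 1.])
print(feature_derivative(example_errors, example_feature))            # 1.1
print(sum(e * f for e, f in zip(example_errors, example_feature)))    # 1.1 as well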
To verify the correctness of the gradient computation and to track the performance of stochastic gradient ascent, we provide a function for computing the average log likelihood (recall from the earlier assignments that the log likelihood was covered in an advanced optional video, and that we compute it in a numerically stable way):

$$\ell\ell_A(\mathbf{w}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$

Note that we made one tiny modification to the log likelihood function (called compute_log_likelihood) from our earlier assignments: we added a $\color{red}{1/N}$ term, which averages the log likelihood across all data points. The $\color{red}{1/N}$ term makes it easier to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots similar to those you saw in the lecture.
In [29]:
def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# scores.shape (53072L, 1L)
# indicator.shape (53072L,)
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum( ( indicator.reshape(scores.shape)-1 )*scores - logexp )/len(feature_matrix)
return lp
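A quick way to sanity-check this function (an aside, not required): with all-zero coefficients every score is 0, so each term in the sum equals $-\ln 2$, and the average log likelihood should be about $-0.693$ regardless of the data:

example_features = np.array([[1., 2., -1.],
                             [1., 0., 1.]])
example_sentiment = np.array([+1, -1])
print(compute_avg_log_likelihood(example_features, example_sentiment, np.zeros(3)))
print(-np.log(2.))   # both should print about -0.69314718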
Quiz Question: Recall from the lecture and the earlier assignment that the log likelihood (without the averaging term) is given by

$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$

How are the functions $\ell\ell(\mathbf{w})$ and $\ell\ell_A(\mathbf{w})$ related?
Recall from the lecture that the gradient for a single data point $\color{red}{\mathbf{x}_i}$ can be computed using the following formula:
$$ \frac{\partial\ell_{\color{red}{i}}(\mathbf{w})}{\partial w_j} = h_j(\color{red}{\mathbf{x}_i})\left(\mathbf{1}[y_\color{red}{i} = +1] - P(y_\color{red}{i} = +1 | \color{red}{\mathbf{x}_i}, \mathbf{w})\right) $$

Computing the gradient for a single data point
Do we really need to re-write all our code to modify $\partial\ell(\mathbf{w})/\partial w_j$ to $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$?
Thankfully, no! Using NumPy, we access $\mathbf{x}_i$ in the training data using feature_matrix_train[i:i+1,:] and $y_i$ in the training data using sentiment_train[i:i+1]. We can compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ by re-using all the code written in feature_derivative and predict_probability.

We compute $\partial\ell_{\color{red}{i}}(\mathbf{w})/\partial w_j$ using the following steps:
- First, compute $P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ using the predict_probability function with feature_matrix_train[i:i+1,:] as the first parameter.
- Next, compute $\mathbf{1}[y_i = +1]$ using sentiment_train[i:i+1].
- Finally, call the feature_derivative function with feature_matrix_train[i:i+1, j] as one of the parameters.

Let us follow these steps for j = 1 and i = 10:
In [20]:
j = 1 # Feature number
i = 10 # Data point number
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)
indicator = (sentiment_train[i:i+1]==+1)
errors = indicator - predictions
gradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])
print "Gradient single data point: %s" % gradient_single_data_point
print " --> Should print 0.0"
Quiz Question: The code block above computed $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ for j = 1 and i = 10. Is $\partial\ell_{\color{red}{i}}(\mathbf{w})/{\partial w_j}$ a scalar or a 194-dimensional vector?
scalar
Stochastic gradient estimates the ascent direction using 1 data point, while gradient uses $N$ data points to decide how to update the parameters. In an optional video, we discussed the details of a simple change that allows us to use a mini-batch of $B \leq N$ data points to estimate the ascent direction. This simple approach is faster than regular gradient but less noisy than stochastic gradient that uses only 1 data point. Although we encourage you to watch the optional video on the topic to better understand why mini-batches help stochastic gradient, in this assignment we will simply use this technique, since the approach is very simple and will improve your results.
Given a mini-batch (or a set of data points) $\mathbf{x}_{i}, \mathbf{x}_{i+1} \ldots \mathbf{x}_{i+B}$, the gradient function for this mini-batch of data points is given by: $$ \color{red}{\sum_{s = i}^{i+B}} \frac{\partial\ell_{s}}{\partial w_j} = \color{red}{\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right) $$
Computing the gradient for a "mini-batch" of data points
Using NumPy, we access the points $\mathbf{x}_i, \mathbf{x}_{i+1} \ldots \mathbf{x}_{i+B}$ in the training data using feature_matrix_train[i:i+B,:] and the corresponding labels $y_i, y_{i+1} \ldots y_{i+B}$ using sentiment_train[i:i+B].
We can compute $\color{red}{\sum_{s = i}^{i+B}} \partial\ell_{s}/\partial w_j$ easily as follows:
In [21]:
j = 1 # Feature number
i = 10 # Data point start
B = 10 # Mini-batch size
coefficients = np.zeros(194) # A point w at which we are computing the gradient.
predictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)
indicator = (sentiment_train[i:i+B]==+1)
errors = indicator - predictions
gradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])
print "Gradient mini-batch data points: %s" % gradient_mini_batch
print " --> Should print 1.0"
Quiz Question: The code block above computed $\color{red}{\sum_{s = i}^{i+B}}\partial\ell_{s}(\mathbf{w})/{\partial w_j}$ for j = 10, i = 10, and B = 10. Is this a scalar or a 194-dimensional vector?

scalar

Quiz Question: For what value of B is the term $\color{red}{\sum_{s = 1}^{B}}\partial\ell_{s}(\mathbf{w})/\partial w_j$ the same as the full gradient $\partial\ell(\mathbf{w})/{\partial w_j}$? Hint: consider the training set we are using now.

B = len(feature_matrix_train), i.e. the size of the training set; feature_matrix_train.shape is (47780L, 194L).
It is a common practice to normalize the gradient update rule by the batch size B:
$$ \frac{\partial\ell_{\color{red}{A}}(\mathbf{w})}{\partial w_j} \approx \color{red}{\frac{1}{B}} {\sum_{s = i}^{i + B}} h_j(\mathbf{x}_s)\left(\mathbf{1}[y_s = +1] - P(y_s = +1 | \mathbf{x}_s, \mathbf{w})\right) $$

In other words, we update the coefficients using the average gradient over data points (instead of using a summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes of stochastic gradient ascent (including a batch size of all the data points), and study the effect of batch size on the algorithm as well as the choice of step size.
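As an illustrative aside (re-using the variables from the mini-batch cell above), the normalized gradient is simply the earlier sum divided by the batch size:

# gradient_mini_batch and B were computed in the mini-batch cell above (i = 10, B = 10, j = 1)
gradient_mini_batch_normalized = gradient_mini_batch / B
print(gradient_mini_batch_normalized)
# the unnormalized gradient printed 1.0, so this should print 0.1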
Now we are ready to implement our own logistic regression with stochastic gradient ascent. Complete the following function to fit a logistic regression model using stochastic gradient ascent:
In [43]:
from math import sqrt
def logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):
log_likelihood_all = []
# make sure it's a numpy array
coefficients = np.array(initial_coefficients)
# set seed=1 to produce consistent results
np.random.seed(seed=1)
# Shuffle the data before starting
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0 # index of current batch
# Do a linear scan over data
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]
### YOUR CODE HERE
predictions = predict_probability(feature_matrix[i:(i+batch_size),:], coefficients)
# Compute indicator value for (y_i = +1)
# Make sure to slice the i-th entry with [i:i+batch_size]
### YOUR CODE HERE
indicator = (sentiment[i:i+batch_size]==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# Compute the derivative for coefficients[j] and save it to derivative.
# Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]
### YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[i:i+batch_size,j])
# Compute the product of the step size, the derivative, and
# the **normalization constant** (1./batch_size)
### YOUR CODE HERE
coefficients[j] += step_size * derivative * 1. / batch_size
# Checking whether log likelihood is increasing
# Print the log likelihood over the *current batch*
lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],
coefficients)
log_likelihood_all.append(lp)
if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \
or itr % 10000 == 0 or itr == max_iter-1:
data_size = len(feature_matrix)
print 'Iteration %*d: Average log likelihood (of data points [%0*d:%0*d]) = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, \
int(np.ceil(np.log10(data_size))), i, \
int(np.ceil(np.log10(data_size))), i+batch_size, lp)
# if we made a complete pass over data, shuffle and restart
i += batch_size
if i+batch_size > len(feature_matrix):
permutation = np.random.permutation(len(feature_matrix))
feature_matrix = feature_matrix[permutation,:]
sentiment = sentiment[permutation]
i = 0
# We return the list of log likelihoods for plotting purposes.
return coefficients, log_likelihood_all
Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.
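The provided logistic_regression_SG only returns the final coefficients, so this averaging is not directly possible here; the sketch below is purely illustrative and assumes a hypothetical coefficients_history list (one entry per iteration) that you could collect by appending np.copy(coefficients) inside the loop.

def average_last_K(coefficients_history, K):
    # Hypothetical helper: coefficients_history is a list of coefficient
    # vectors, one per iteration. Averaging the last K of them smooths out
    # the oscillation of stochastic gradient ascent around the optimum.
    return np.mean(coefficients_history[-K:], axis=0)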
In [46]:
sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])
sample_sentiment = np.array([+1, -1])
coefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3), step_size=1., batch_size=2, max_iter=2)
print '-------------------------------------------------------------------------------------'
print 'Coefficients learned :', coefficients
print 'Average log likelihood per-iteration :', log_likelihood
if np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\
and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):
# pass if elements match within 1e-3
print '-------------------------------------------------------------------------------------'
print 'Test passed!'
else:
print '-------------------------------------------------------------------------------------'
print 'Test failed'
For the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?
Quiz Question: For what value of batch size B above does the stochastic gradient ascent function logistic_regression_SG act as a standard gradient ascent algorithm? Hint: consider the training set we are using now.
Instead of implementing batch gradient ascent separately, we save time by re-using the stochastic gradient ascent function we just wrote: to perform batch gradient ascent, it suffices to set batch_size to the number of data points in the training data. (Yes, we just answered the quiz question above for you, but that is an important point to remember in the future.)
Small Caveat. The batch gradient ascent implementation here is slightly different than the one in the earlier assignments, as we now normalize the gradient update rule.
We now run stochastic gradient ascent over the feature_matrix_train for 10 iterations using:
initial_coefficients = np.zeros(194)
step_size = 5e-1
batch_size = 1
max_iter = 10
In [47]:
coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,\
initial_coefficients=np.zeros(194),\
step_size=5e-1, batch_size=1, max_iter=10)
In [50]:
plt.plot(log_likelihood)
plt.show()
Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?
Now run batch gradient ascent over the feature_matrix_train for 200 iterations using:
initial_coefficients = np.zeros(194)
step_size = 5e-1
batch_size = len(feature_matrix_train)
max_iter = 200
In [51]:
# YOUR CODE HERE
coefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,\
initial_coefficients=np.zeros(194),\
step_size=5e-1, batch_size=len(feature_matrix_train), max_iter=200)
In [52]:
plt.plot(log_likelihood_batch)
plt.show()
Quiz Question. When you set batch_size = len(feature_matrix_train), as each iteration passes, how does the average log likelihood in the batch change?
To make a fair comparison between stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows): $$ [\text{# of passes}] = \frac{[\text{# of data points touched so far}]}{[\text{size of dataset}]} $$
Quiz Question: Suppose that we run stochastic gradient ascent with a batch size of 100. How many gradient updates are performed at the end of two passes over a dataset consisting of 50000 data points?
1000
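To see where this number comes from: each gradient update touches batch_size data points, so

$$ [\text{# of updates}] = \frac{[\text{# of passes}] \times [\text{size of dataset}]}{[\text{batch size}]} = \frac{2 \times 50000}{100} = 1000 $$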
With the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use:
- step_size=1e-1
- batch_size=100
- initial_coefficients set to all zeros
In [53]:
step_size = 1e-1
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
In [54]:
plt.plot(log_likelihood_sgd)
plt.show()
We provide you with a utility function to plot the average log likelihood as a function of the number of passes.
In [57]:
def make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):
plt.rcParams.update({'figure.figsize': (9,5)})
log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \
np.ones((smoothing_window,))/smoothing_window, mode='valid')
plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,
log_likelihood_all_ma, linewidth=4.0, label=label)
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
plt.xlabel('# of passes over data')
plt.ylabel('Average log likelihood per data point')
plt.legend(loc='lower right', prop={'size':14})
In [56]:
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
label='stochastic gradient, step_size=1e-1')
The plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of the log likelihood over the last smoothing_window "iterations" of stochastic gradient ascent.
In [58]:
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic gradient, step_size=1e-1')
Checkpoint: The above plot should look smoother than the previous plot. Play around with smoothing_window. As you increase it, you should see a smoother plot.
To compare convergence rates for stochastic gradient ascent with batch gradient ascent, we call make_plot() multiple times in the same cell.

We are comparing:
- stochastic gradient ascent: step_size = 0.1, batch_size=100
- batch gradient ascent: step_size = 0.5, batch_size=len(feature_matrix_train)
Write code to run stochastic gradient ascent for 200 passes using:
- step_size=1e-1
- batch_size=100
- initial_coefficients set to all zeros
In [59]:
step_size = 1e-1
batch_size = 100
num_passes = 200
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
## YOUR CODE HERE
coefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=1e-1, batch_size=100, max_iter=num_iterations)
In [60]:
# YOUR CODE HERE
coefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,\
initial_coefficients=np.zeros(194),\
step_size=5e-1, batch_size=len(feature_matrix_train), max_iter=200)
We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.
In [61]:
make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,
smoothing_window=30, label='stochastic, step_size=1e-1')
make_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),
smoothing_window=1, label='batch, step_size=5e-1')
Quiz Question: In the figure above, how many passes does batch gradient ascent need to achieve a similar log likelihood as stochastic gradient ascent?
In previous sections, we chose step sizes for you. In practice, it helps to know how to choose good step sizes yourself.
To start, we explore a wide range of step sizes that are equally spaced in the log space. Run stochastic gradient ascent with step_size set to 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, and 1e2. Use the following set of parameters:
- initial_coefficients=np.zeros(194)
- batch_size=100
- max_iter set so as to run 10 passes over the data
In [62]:
batch_size = 100
num_passes = 10
num_iterations = num_passes * int(len(feature_matrix_train)/batch_size)
coefficients_sgd = {}
log_likelihood_sgd = {}
for step_size in np.logspace(-4, 2, num=7):
coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,\
initial_coefficients=np.zeros(194),\
step_size=step_size, batch_size=batch_size, max_iter=num_iterations)
For consistency, we again apply smoothing_window=30.
In [63]:
for step_size in np.logspace(-4, 2, num=7):
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.
In [64]:
for step_size in np.logspace(-4, 2, num=7)[0:6]:
make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,
smoothing_window=30, label='step_size=%.1e'%step_size)
Quiz Question: Which of the following is the worst step size? Pick the step size that results in the lowest log likelihood in the end.
Quiz Question: Which of the following is the best step size? Pick the step size that results in the highest log likelihood in the end.
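If you prefer to read these answers off numerically rather than from the plot, here is a minimal sketch (assuming the log_likelihood_sgd dictionary computed above) that averages the log likelihood over the final 100 iterations for each step size:

for step_size in np.logspace(-4, 2, num=7):
    # Average the last 100 per-batch log likelihoods as a rough measure
    # of where each curve ends up.
    final_ll = np.mean(log_likelihood_sgd[step_size][-100:])
    print('step_size = %.1e : average log likelihood over last 100 iterations = %.5f' % (step_size, final_ll))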
In [ ]: