In [1]:
import keras
keras.__version__
Out[1]:
In [2]:
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
num_words=10000 means we keep only the 10,000 most frequently occurring words; in the labels, 0 indicates a negative review and 1 a positive review.
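To get a feel for the loaded data, a quick sanity check such as the one below can be run; it assumes the standard IMDB split of 25,000 training and 25,000 test reviews that load_data returns.
In [ ]:
# Quick look at the loaded data (standard 25,000/25,000 IMDB split)
print(train_data.shape, test_data.shape)   # (25000,) (25000,)
print(train_data[0][:10])                  # first 10 word indices of the first review
print(train_labels[0])                     # 1 = positive, 0 = negative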
In [3]:
max([max(sequence) for sequence in train_data])
Out[3]:
We can also use the word index dictionary to reconstruct the original review text from the encoded data.
In [4]:
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
# show
decoded_review
Out[4]:
In [0]:
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # set specific indices of results[i] to 1s
    return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
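As a sanity check on the multi-hot encoding, every review should now be a 10,000-dimensional vector of 0s and 1s; a minimal check, assuming the cell above has been run:
In [ ]:
# Each review is now a 10,000-dimensional multi-hot vector
print(x_train.shape)      # (25000, 10000)
print(x_train[0][:10])    # entries are 0. or 1., marking which word indices appear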
We also convert the labels into float32 vectors.
In [0]:
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
In [0]:
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
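Calling model.summary() is an optional way to confirm the layer shapes and parameter counts implied by the 10,000-dimensional input and the two 16-unit hidden layers:
In [ ]:
# Optional: inspect layer shapes and parameter counts
model.summary()
# first Dense layer:  10000 * 16 + 16 = 160,016 parameters
# second Dense layer: 16 * 16 + 16    = 272 parameters
# output layer:       16 * 1 + 1      = 17 parameters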
Set the optimizer to rmsprop and the loss function to binary_crossentropy, then compile the network.
In [0]:
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
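If finer control over the optimizer is needed later, the string shortcut can be swapped for an explicit object; the sketch below is equivalent to the call above (learning_rate=0.001 is simply RMSprop's default, and older Keras versions spell the argument lr instead):
In [ ]:
# Equivalent compile call with an explicit optimizer object (illustrative sketch)
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])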
In [0]:
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
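This holds out the first 10,000 samples for validation and leaves 15,000 for training; assuming the standard 25,000-review training set, the shapes can be confirmed like so:
In [ ]:
# 10,000 validation samples, 15,000 training samples
print(x_val.shape, partial_x_train.shape)   # (10000, 10000) (15000, 10000)
print(y_val.shape, partial_y_train.shape)   # (10000,) (15000,)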
Start training the model.
In [10]:
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=100,
                    batch_size=512,
                    validation_data=(x_val, y_val))
The training process stores its metrics in history; analyzing this information afterwards helps us tune the parameters.
In [11]:
history_dict = history.history
history_dict.keys()
Out[11]:
The call above shows what information the training History contains; next we plot that information as charts, as follows:
In [12]:
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
In [13]:
plt.clf() # clear figure
acc_values = history_dict['accuracy']
val_acc_values = history_dict['val_accuracy']
plt.plot(epochs, acc_values, 'bo', label='Training acc')
plt.plot(epochs, val_acc_values, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
From the figures above we can see that, with the current network architecture, the best result is actually reached around the 3rd epoch; training beyond that point causes overfitting. Therefore, in this case, setting the number of epochs to 3 or 4 is the way to obtain the best-trained model.
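Following that observation, one way to wrap up is to retrain a fresh model from scratch for 4 epochs on the full training set and evaluate it on the test data; the sketch below reuses the same architecture, and the exact test accuracy it prints will vary from run to run.
In [ ]:
# Retrain from scratch for 4 epochs, then evaluate on the held-out test set
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)
print(results)   # [test loss, test accuracy]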