/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:25: UserWarning: Update your `Conv1D` call to the Keras 2 API: `Conv1D(filters=128, kernel_size=3, activation="relu")`
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:25: UserWarning: Update your `Conv1D` call to the Keras 2 API: `Conv1D(filters=128, kernel_size=4, activation="relu")`
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:25: UserWarning: Update your `Conv1D` call to the Keras 2 API: `Conv1D(filters=128, kernel_size=5, activation="relu")`
/usr/local/lib/python3.5/dist-packages/ipykernel_launcher.py:29: UserWarning: The `Merge` layer is deprecated and will be removed after 08/2017. Use instead layers from `keras.layers.merge`, e.g. `add`, `concatenate`, etc.
Model fitting: a more complex convolutional neural network
__________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
==================================================================================================
input_2 (InputLayer)             (None, 1000)          0
__________________________________________________________________________________________________
embedding_2 (Embedding)          (None, 1000, 100)     607800      input_2[0][0]
__________________________________________________________________________________________________
conv1d_4 (Conv1D)                (None, 998, 128)      38528       embedding_2[0][0]
__________________________________________________________________________________________________
conv1d_5 (Conv1D)                (None, 997, 128)      51328       embedding_2[0][0]
__________________________________________________________________________________________________
conv1d_6 (Conv1D)                (None, 996, 128)      64128       embedding_2[0][0]
__________________________________________________________________________________________________
max_pooling1d_4 (MaxPooling1D)   (None, 199, 128)      0           conv1d_4[0][0]
__________________________________________________________________________________________________
max_pooling1d_5 (MaxPooling1D)   (None, 199, 128)      0           conv1d_5[0][0]
__________________________________________________________________________________________________
max_pooling1d_6 (MaxPooling1D)   (None, 199, 128)      0           conv1d_6[0][0]
__________________________________________________________________________________________________
merge_1 (Merge)                  (None, 597, 128)      0           max_pooling1d_4[0][0]
                                                                   max_pooling1d_5[0][0]
                                                                   max_pooling1d_6[0][0]
__________________________________________________________________________________________________
conv1d_7 (Conv1D)                (None, 593, 128)      82048       merge_1[0][0]
__________________________________________________________________________________________________
max_pooling1d_7 (MaxPooling1D)   (None, 118, 128)      0           conv1d_7[0][0]
__________________________________________________________________________________________________
conv1d_8 (Conv1D)                (None, 114, 128)      82048       max_pooling1d_7[0][0]
__________________________________________________________________________________________________
max_pooling1d_8 (MaxPooling1D)   (None, 3, 128)        0           conv1d_8[0][0]
__________________________________________________________________________________________________
flatten_2 (Flatten)              (None, 384)           0           max_pooling1d_8[0][0]
__________________________________________________________________________________________________
dense_3 (Dense)                  (None, 128)           49280       flatten_2[0][0]
__________________________________________________________________________________________________
dense_4 (Dense)                  (None, 4)              516        dense_3[0][0]
==================================================================================================
Total params: 975,676
Trainable params: 975,676
Non-trainable params: 0
__________________________________________________________________________________________________
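Both the "Output Shape" and "Param #" columns of the summary above can be reproduced by hand: a "valid" Conv1D shortens the sequence by kernel_size − 1 and costs kernel_size × in_channels × filters + filters parameters, while MaxPooling1D floor-divides the length by its pool size and is parameter-free; the deprecated `Merge` layer (`concatenate` in the Keras 2 API) joins the three pooled branches along the time axis. The pool sizes (5, 5, 5, 5, 30), the later kernel sizes, and the 6078-word vocabulary are not shown in the log and are inferred from the printed shapes and the embedding's 607,800 parameters. A quick check in plain Python:

```python
# Reproduce the "Output Shape" and "Param #" columns of the summary above.
# Pool sizes and the later kernel sizes are inferred from the printed shapes.

def conv1d(length, kernel_size, in_channels, filters):
    """'valid' Conv1D: new sequence length and parameter count (weights + biases)."""
    return length - kernel_size + 1, kernel_size * in_channels * filters + filters

def pool1d(length, pool_size):
    """MaxPooling1D: floor-divides the length, has no parameters."""
    return length // pool_size

def dense(n_in, n_out):
    return n_in * n_out + n_out  # weights plus biases

params = [6078 * 100]            # embedding_2: inferred 6078-word vocab x 100 dims

# Three parallel branches over the 1000-step embedded sequence.
branch_lengths = []
for k in (3, 4, 5):              # kernel sizes from the Conv1D warnings above
    length, p = conv1d(1000, k, 100, 128)    # lengths 998, 997, 996
    params.append(p)                         # 38528, 51328, 64128
    branch_lengths.append(pool1d(length, 5)) # each branch pools down to 199

merged = sum(branch_lengths)     # merge_1 concatenates along the time axis: 597

length, p = conv1d(merged, 5, 128, 128)  # conv1d_7: length 593, 82048 params
params.append(p)
length = pool1d(length, 5)               # max_pooling1d_7: 118
length, p = conv1d(length, 5, 128, 128)  # conv1d_8: length 114, 82048 params
params.append(p)
length = pool1d(length, 30)              # max_pooling1d_8: 3
flat = length * 128                      # flatten_2: 384

params.append(dense(flat, 128))          # dense_3: 49280
params.append(dense(128, 4))             # dense_4: 516

print(merged, flat, sum(params))  # 597 384 975676, matching the summary
```

The concatenated length (597) and flattened width (384) fall out of the same arithmetic, which is a useful sanity check when resizing the input or the pool sizes.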
Train on 1160 samples, validate on 289 samples
Epoch 1/20
1150/1160 [============================>.] - ETA: 0s - loss: 1.0777 - acc: 0.4496
Epoch 00001: val_acc improved from -inf to 0.63322, saving model to weights.001-0.6332.hdf5
1160/1160 [==============================] - 76s 65ms/step - loss: 1.0758 - acc: 0.4509 - val_loss: 0.8386 - val_acc: 0.6332
Epoch 2/20
1150/1160 [============================>.] - ETA: 0s - loss: 0.6540 - acc: 0.7513
Epoch 00002: val_acc improved from 0.63322 to 0.80277, saving model to weights.002-0.8028.hdf5
1160/1160 [==============================] - 77s 66ms/step - loss: 0.6538 - acc: 0.7500 - val_loss: 0.5136 - val_acc: 0.8028
Epoch 3/20
1150/1160 [============================>.] - ETA: 0s - loss: 0.5013 - acc: 0.8096
Epoch 00003: val_acc improved from 0.80277 to 0.80969, saving model to weights.003-0.8097.hdf5
1160/1160 [==============================] - 84s 72ms/step - loss: 0.5019 - acc: 0.8086 - val_loss: 0.4972 - val_acc: 0.8097
Epoch 4/20
1150/1160 [============================>.] - ETA: 0s - loss: 0.3532 - acc: 0.8748
Epoch 00004: val_acc did not improve
1160/1160 [==============================] - 79s 68ms/step - loss: 0.3522 - acc: 0.8759 - val_loss: 0.5626 - val_acc: 0.7889
Epoch 5/20
1150/1160 [============================>.] - ETA: 0s - loss: 0.3076 - acc: 0.8974
Epoch 00005: val_acc improved from 0.80969 to 0.83045, saving model to weights.005-0.8304.hdf5
1160/1160 [==============================] - 76s 66ms/step - loss: 0.3054 - acc: 0.8983 - val_loss: 0.4788 - val_acc: 0.8304
Epoch 6/20
1150/1160 [============================>.] - ETA: 0s - loss: 0.2362 - acc: 0.9148
Epoch 00006: val_acc improved from 0.83045 to 0.83737, saving model to weights.006-0.8374.hdf5
1160/1160 [==============================] - 77s 66ms/step - loss: 0.2419 - acc: 0.9121 - val_loss: 0.4716 - val_acc: 0.8374
Epoch 7/20
1150/1160 [============================>.] - ETA: 0s - loss: 0.2083 - acc: 0.9322
Epoch 00007: val_acc did not improve
1160/1160 [==============================] - 77s 66ms/step - loss: 0.2070 - acc: 0.9328 - val_loss: 0.5093 - val_acc: 0.8270
Epoch 8/20
1150/1160 [============================>.] - ETA: 0s - loss: 0.1352 - acc: 0.9670
Epoch 00008: val_acc did not improve
1160/1160 [==============================] - 81s 70ms/step - loss: 0.1345 - acc: 0.9672 - val_loss: 0.5098 - val_acc: 0.8304
Epoch 00008: early stopping
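The checkpoint messages and the early stop suggest a `ModelCheckpoint` monitoring `val_acc` together with `EarlyStopping`: `val_acc` last improved at epoch 6 and training halted at epoch 8, consistent with a patience of 2, though the exact callback settings are an assumption. A minimal pure-Python re-enactment of that logic, fed the `val_acc` series from the log above:

```python
# Minimal re-enactment of the checkpoint / early-stopping behavior seen in the
# log. The patience value (2) is an assumption consistent with stopping at
# epoch 8 after the last improvement at epoch 6.

def run_callbacks(val_accs, patience=2):
    best, best_epoch, wait = float("-inf"), None, 0
    saved = []  # checkpoint filenames, mirroring weights.{epoch:03d}-{val_acc:.4f}.hdf5
    for epoch, acc in enumerate(val_accs, start=1):
        if acc > best:
            best, best_epoch, wait = acc, epoch, 0
            saved.append("weights.%03d-%.4f.hdf5" % (epoch, acc))
        else:
            wait += 1
            if wait >= patience:
                return epoch, best_epoch, saved  # early stop
    return len(val_accs), best_epoch, saved

val_accs = [0.6332, 0.8028, 0.8097, 0.7889, 0.8304, 0.8374, 0.8270, 0.8304]
stopped, best_epoch, saved = run_callbacks(val_accs)
print(stopped, best_epoch, saved[-1])  # 8 6 weights.006-0.8374.hdf5
```

Note that the single non-improving epoch 4 only increments the wait counter; two consecutive non-improving epochs (7 and 8) are needed to trigger the stop.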
289/289 [==============================] - 5s 18ms/step
Test score: 0.509825248722
Test accuracy: 0.830449808221
dict_keys(['loss', 'val_acc', 'val_loss', 'acc'])
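The final line prints the keys of `history.history`, the per-epoch metric dict that `model.fit` returns. A common follow-up is to pull curves out of it for plotting or to locate the best epoch; a sketch below, with the dict mocked from the values in the log above rather than taken from a live `History` object:

```python
# Working with the history dict whose keys are printed above. The values here
# are copied from the training log, standing in for history.history.

history = {
    "acc":      [0.4509, 0.7500, 0.8086, 0.8759, 0.8983, 0.9121, 0.9328, 0.9672],
    "val_acc":  [0.6332, 0.8028, 0.8097, 0.7889, 0.8304, 0.8374, 0.8270, 0.8304],
    "loss":     [1.0758, 0.6538, 0.5019, 0.3522, 0.3054, 0.2419, 0.2070, 0.1345],
    "val_loss": [0.8386, 0.5136, 0.4972, 0.5626, 0.4788, 0.4716, 0.5093, 0.5098],
}

# Locate the best epoch by validation accuracy (epochs are 1-based in the log).
best_epoch = max(range(len(history["val_acc"])),
                 key=history["val_acc"].__getitem__) + 1
print(best_epoch, history["val_acc"][best_epoch - 1])  # 6 0.8374
```

The gap between the final training accuracy (0.9672) and the best validation accuracy (0.8374) also shows why the early stop fired: the model was overfitting from epoch 7 on.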