Feature: Out-Of-Fold Predictions from a CNN (+Magic Inputs)

In addition to the convolutional architecture, we'll concatenate some of the leaky ("magic") features into the intermediate dense feature layer.

Imports

This utility package imports numpy, pandas, matplotlib, and a helper kg module into the root namespace.


In [1]:
from pygoose import *

In [2]:
import gc

In [3]:
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import *

In [5]:
from keras import backend as K
from keras import initializers
from keras.models import Model, Sequential
from keras.layers import *
from keras.callbacks import EarlyStopping, ModelCheckpoint


Using TensorFlow backend.

Config

Automatically discover the paths to various data folders and compose the project structure.


In [6]:
project = kg.Project.discover()

Identifier for storing these features on disk and referring to them later.


In [7]:
feature_list_id = 'oofp_nn_cnn_with_magic'

Make subsequent NN runs reproducible.


In [8]:
RANDOM_SEED = 42

In [9]:
np.random.seed(RANDOM_SEED)

Read data

Word embedding lookup matrix.


In [10]:
embedding_matrix = kg.io.load(project.aux_dir + 'fasttext_vocab_embedding_matrix.pickle')

Padded sequences of word indices for every question.


In [11]:
X_train_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_train.pickle')
X_train_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_train.pickle')

In [12]:
X_test_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_test.pickle')
X_test_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_test.pickle')

In [13]:
y_train = kg.io.load(project.features_dir + 'y_train.pickle')

Magic features.


In [14]:
magic_feature_lists = [
    'magic_frequencies',
    'magic_cooccurrence_matrix',
]

In [15]:
X_train_magic, X_test_magic, _ = project.load_feature_lists(magic_feature_lists)

In [16]:
X_train_magic = X_train_magic.values
X_test_magic = X_test_magic.values

In [17]:
scaler = StandardScaler()
scaler.fit(np.vstack([X_train_magic, X_test_magic]))
X_train_magic = scaler.transform(X_train_magic)
X_test_magic = scaler.transform(X_test_magic)
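A minimal sketch of this scaling step on toy data (stand-in matrices, not the actual magic features): fitting the scaler on the stacked train and test matrices guarantees both are transformed with identical per-column statistics.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy stand-ins for the magic feature matrices.
X_tr = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
X_te = np.array([[4.0, 40.0], [5.0, 50.0]])

scaler = StandardScaler()
# Fit on train and test together so both use the same mean and variance.
scaler.fit(np.vstack([X_tr, X_te]))
X_tr_s = scaler.transform(X_tr)
X_te_s = scaler.transform(X_te)

# The pooled, transformed data has zero mean and unit variance per column.
pooled = np.vstack([X_tr_s, X_te_s])
print(np.allclose(pooled.mean(axis=0), 0.0))  # True
print(np.allclose(pooled.std(axis=0), 1.0))   # True
```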

Word embedding properties.


In [18]:
EMBEDDING_DIM = embedding_matrix.shape[-1]
VOCAB_LENGTH = embedding_matrix.shape[0]
MAX_SEQUENCE_LENGTH = X_train_q1.shape[-1]

In [19]:
print(EMBEDDING_DIM, VOCAB_LENGTH, MAX_SEQUENCE_LENGTH)


300 101564 30

Define models


In [20]:
init_weights = initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=2)
init_bias = 'zeros'

In [21]:
def create_embedding_block():
    input_seq = Input(shape=(MAX_SEQUENCE_LENGTH, ), dtype='int32')
    
    embedding_seq = Embedding(
        VOCAB_LENGTH,
        EMBEDDING_DIM,
        weights=[embedding_matrix],
        input_length=MAX_SEQUENCE_LENGTH,
        trainable=False,
    )(input_seq)
    
    output_seq = embedding_seq
    return input_seq, output_seq
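Since the Embedding layer is frozen (`trainable=False`), it acts as a pure lookup table: each word index selects a row of the embedding matrix. The equivalent numpy operation, on a toy matrix rather than the fastText one:

```python
import numpy as np

# Toy embedding matrix: 5-word vocabulary, 3-dimensional vectors.
emb = np.arange(15, dtype='float32').reshape(5, 3)

# A short sequence of word indices, as in X_train_q1.
seq = np.array([2, 0, 4])

# A frozen Embedding layer's output is simply row indexing.
looked_up = emb[seq]
print(looked_up.shape)  # (3, 3)
```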

In [22]:
def create_model_question_conv_branch(input_seq, params):
    conv_1 = Conv1D(
        params['num_conv_filters'],
        kernel_size=params['conv_kernel_size'],
        padding='same',
    )(input_seq)
    
    bn_1 = BatchNormalization()(conv_1)
    relu_1 = Activation('relu')(bn_1)
    dropout_1 = Dropout(params['conv_dropout_rate'])(relu_1)

    conv_2 = Conv1D(
        params['num_conv_filters'],
        kernel_size=params['conv_kernel_size'],
        padding='same',
    )(dropout_1)
    
    bn_2 = BatchNormalization()(conv_2)
    relu_2 = Activation('relu')(bn_2)
    dropout_2 = Dropout(params['conv_dropout_rate'])(relu_2)
    
    flatten = Flatten()(dropout_2)
    output = flatten
    
    return output

In [23]:
def create_model_question_timedist_max_branch(input_seq, params):
    timedist = TimeDistributed(Dense(EMBEDDING_DIM))(input_seq)
    bn = BatchNormalization()(timedist)
    relu = Activation('relu')(bn)
    dropout = Dropout(params['timedist_dropout_rate'])(relu)

    lambda_max = Lambda(
        lambda x: K.max(x, axis=1),
        output_shape=(EMBEDDING_DIM, )
    )(dropout)
    
    output = lambda_max
    return output
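The Lambda layer collapses the time axis by taking the element-wise maximum over all timesteps, i.e. max-over-time pooling. The same operation, sketched in numpy on a toy (batch, time, dim) tensor:

```python
import numpy as np

# Toy activations: batch of 2 sequences, 4 timesteps, 3 features.
x = np.random.RandomState(0).rand(2, 4, 3)

# Max-over-time pooling, equivalent to K.max(x, axis=1) in the model.
pooled = x.max(axis=1)
print(pooled.shape)  # (2, 3)
```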

In [24]:
def create_dense_block(input_layer, num_units, dropout_rate):
    dense = Dense(
        num_units,
        kernel_initializer=init_weights,
        bias_initializer=init_bias,
    )(input_layer)
    bn = BatchNormalization()(dense)
    relu = Activation('relu')(bn)
    dropout = Dropout(dropout_rate)(relu)
    output = dropout
    
    return output

In [25]:
def create_model(params):
    input_q1, emb_q1 = create_embedding_block()
    input_q2, emb_q2 = create_embedding_block()
    
    # Feature extractors.
    conv_q1_output = create_model_question_conv_branch(emb_q1, params)
    conv_q2_output = create_model_question_conv_branch(emb_q2, params)
    
    timedist_q1_output = create_model_question_timedist_max_branch(emb_q1, params)
    timedist_q2_output = create_model_question_timedist_max_branch(emb_q2, params)
    
    # Mid-level transforms.
    conv_merged = concatenate([conv_q1_output, conv_q2_output])
    conv_dense_1 = create_dense_block(conv_merged, params['num_dense_1'], params['dense_dropout_rate'])
    conv_dense_2 = create_dense_block(conv_dense_1, params['num_dense_2'], params['dense_dropout_rate'])

    td_merged = concatenate([timedist_q1_output, timedist_q2_output])
    td_dense_1 = create_dense_block(td_merged, params['num_dense_1'], params['dense_dropout_rate'])
    td_dense_2 = create_dense_block(td_dense_1, params['num_dense_2'], params['dense_dropout_rate'])

    # Magic features.
    magic_input = Input(shape=(X_train_magic.shape[-1], ))
    
    # Main dense block.
    merged_main = concatenate([conv_dense_2, td_dense_2, magic_input])
    dense_main_1 = create_dense_block(merged_main, params['num_dense_1'], params['dense_dropout_rate'])
    dense_main_2 = create_dense_block(dense_main_1, params['num_dense_2'], params['dense_dropout_rate'])
    dense_main_3 = create_dense_block(dense_main_2, params['num_dense_3'], params['dense_dropout_rate'])
    
    output = Dense(
        1,
        kernel_initializer=init_weights,
        bias_initializer=init_bias,
        activation='sigmoid',
    )(dense_main_3)
    
    model = Model(
        inputs=[input_q1, input_q2, magic_input],
        outputs=output,
    )
    
    model.compile(
        loss='binary_crossentropy',
        optimizer='nadam',
        metrics=['accuracy']
    )

    return model

In [26]:
def predict(model, X_q1, X_q2, X_magic):
    """
    Mirror the pairs, compute two separate predictions, and average them.
    """
    
    y1 = model.predict([X_q1, X_q2, X_magic], batch_size=1024, verbose=1).reshape(-1)   
    y2 = model.predict([X_q2, X_q1, X_magic], batch_size=1024, verbose=1).reshape(-1)    
    return (y1 + y2) / 2

Partition the data


In [27]:
NUM_FOLDS = 5

In [28]:
kfold = StratifiedKFold(
    n_splits=NUM_FOLDS,
    shuffle=True,
    random_state=RANDOM_SEED
)

Create placeholders for out-of-fold predictions.


In [29]:
y_train_oofp = np.zeros_like(y_train, dtype='float64')

In [30]:
y_test_oofp = np.zeros((len(X_test_q1), NUM_FOLDS))
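A compact sketch of the out-of-fold bookkeeping on toy data: across all folds, every training row receives exactly one prediction, produced by a model that never saw it. Here a trivial stand-in (the row mean) replaces the actual fit-and-predict step.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

rng = np.random.RandomState(42)
X = rng.rand(20, 3)
y = np.array([0, 1] * 10)

# NaN placeholder lets us verify that every slot gets filled.
oof = np.full(len(y), np.nan)

kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for ix_train, ix_val in kfold.split(X, y):
    # Stand-in for model.fit + predict on the held-out fold.
    oof[ix_val] = X[ix_val].mean(axis=1)

# Every row was predicted exactly once.
print(np.isnan(oof).any())  # False
```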

Define hyperparameters


In [31]:
BATCH_SIZE = 2048

In [32]:
MAX_EPOCHS = 200

In [33]:
model_params = {
    'num_conv_filters': 32,
    'num_dense_1': 256,
    'num_dense_2': 128,
    'num_dense_3': 100,
    'conv_kernel_size': 3,
    'conv_dropout_rate': 0.25,
    'timedist_dropout_rate': 0.25,
    'dense_dropout_rate': 0.25,
}

The path where the best weights of the current model will be saved.


In [34]:
model_checkpoint_path = project.temp_dir + 'fold-checkpoint-' + feature_list_id + '.h5'

Fit the folds and compute out-of-fold predictions


In [35]:
%%time

# Iterate through folds.
for fold_num, (ix_train, ix_val) in enumerate(kfold.split(X_train_q1, y_train)):
    
    # Augment the training set by mirroring the pairs.
    X_fold_train_q1 = np.vstack([X_train_q1[ix_train], X_train_q2[ix_train]])
    X_fold_train_q2 = np.vstack([X_train_q2[ix_train], X_train_q1[ix_train]])
    X_fold_train_magic = np.vstack([X_train_magic[ix_train], X_train_magic[ix_train]])

    X_fold_val_q1 = np.vstack([X_train_q1[ix_val], X_train_q2[ix_val]])
    X_fold_val_q2 = np.vstack([X_train_q2[ix_val], X_train_q1[ix_val]])
    X_fold_val_magic = np.vstack([X_train_magic[ix_val], X_train_magic[ix_val]])

    # Ground truth should also be "mirrored".
    y_fold_train = np.concatenate([y_train[ix_train], y_train[ix_train]])
    y_fold_val = np.concatenate([y_train[ix_val], y_train[ix_val]])
    
    print()
    print(f'Fitting fold {fold_num + 1} of {kfold.n_splits}')
    print()
    
    # Compile a new model.
    model = create_model(model_params)

    # Train.
    model.fit(
        [X_fold_train_q1, X_fold_train_q2, X_fold_train_magic], y_fold_train,
        validation_data=([X_fold_val_q1, X_fold_val_q2, X_fold_val_magic], y_fold_val),

        batch_size=BATCH_SIZE,
        epochs=MAX_EPOCHS,
        verbose=1,
        
        callbacks=[
            # Stop training when the validation loss stops improving.
            EarlyStopping(
                monitor='val_loss',
                min_delta=0.001,
                patience=3,
                verbose=1,
                mode='auto',
            ),
            # Save the weights of the best epoch.
            ModelCheckpoint(
                model_checkpoint_path,
                monitor='val_loss',
                save_best_only=True,
                verbose=2,
            ),
        ],
    )
        
    # Restore the best epoch.
    model.load_weights(model_checkpoint_path)
    
    # Compute out-of-fold predictions.
    y_train_oofp[ix_val] = predict(model, X_train_q1[ix_val], X_train_q2[ix_val], X_train_magic[ix_val])
    y_test_oofp[:, fold_num] = predict(model, X_test_q1, X_test_q2, X_test_magic)
    
    # Clear GPU memory.
    K.clear_session()
    del X_fold_train_q1
    del X_fold_train_q2
    del X_fold_val_q1
    del X_fold_val_q2
    del model
    gc.collect()


Fitting fold 1 of 5

Train on 646862 samples, validate on 161718 samples
Epoch 1/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.3594 - acc: 0.8420Epoch 00000: val_loss improved from inf to 0.41719, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 36s - loss: 0.3594 - acc: 0.8420 - val_loss: 0.4172 - val_acc: 0.7607
Epoch 2/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.3105 - acc: 0.8605Epoch 00001: val_loss improved from 0.41719 to 0.30802, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 34s - loss: 0.3104 - acc: 0.8605 - val_loss: 0.3080 - val_acc: 0.8545
Epoch 3/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2916 - acc: 0.8694Epoch 00002: val_loss improved from 0.30802 to 0.28898, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 34s - loss: 0.2916 - acc: 0.8694 - val_loss: 0.2890 - val_acc: 0.8694
Epoch 4/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2787 - acc: 0.8755Epoch 00003: val_loss improved from 0.28898 to 0.28317, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 34s - loss: 0.2787 - acc: 0.8755 - val_loss: 0.2832 - val_acc: 0.8730
Epoch 5/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2687 - acc: 0.8806Epoch 00004: val_loss improved from 0.28317 to 0.27955, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 34s - loss: 0.2687 - acc: 0.8806 - val_loss: 0.2795 - val_acc: 0.8753
Epoch 6/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2604 - acc: 0.8848Epoch 00005: val_loss improved from 0.27955 to 0.27764, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 34s - loss: 0.2604 - acc: 0.8848 - val_loss: 0.2776 - val_acc: 0.8750
Epoch 7/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2533 - acc: 0.8883Epoch 00006: val_loss improved from 0.27764 to 0.27308, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 34s - loss: 0.2533 - acc: 0.8883 - val_loss: 0.2731 - val_acc: 0.8780
Epoch 8/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2464 - acc: 0.8923Epoch 00007: val_loss improved from 0.27308 to 0.27079, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 34s - loss: 0.2464 - acc: 0.8923 - val_loss: 0.2708 - val_acc: 0.8788
Epoch 9/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2410 - acc: 0.8948Epoch 00008: val_loss did not improve
646862/646862 [==============================] - 34s - loss: 0.2410 - acc: 0.8948 - val_loss: 0.2744 - val_acc: 0.8766
Epoch 10/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2355 - acc: 0.8969Epoch 00009: val_loss did not improve
646862/646862 [==============================] - 34s - loss: 0.2355 - acc: 0.8968 - val_loss: 0.2711 - val_acc: 0.8791
Epoch 11/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2305 - acc: 0.8995Epoch 00010: val_loss did not improve
646862/646862 [==============================] - 34s - loss: 0.2305 - acc: 0.8995 - val_loss: 0.2744 - val_acc: 0.8780
Epoch 12/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2258 - acc: 0.9014Epoch 00011: val_loss did not improve
646862/646862 [==============================] - 34s - loss: 0.2258 - acc: 0.9013 - val_loss: 0.2712 - val_acc: 0.8811
Epoch 00011: early stopping
80859/80859 [==============================] - 1s     
2345796/2345796 [==============================] - 39s    
2344960/2345796 [============================>.] - ETA: 0s
Fitting fold 2 of 5

Train on 646862 samples, validate on 161718 samples
Epoch 1/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.3581 - acc: 0.8424Epoch 00000: val_loss improved from inf to 0.34933, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 36s - loss: 0.3580 - acc: 0.8425 - val_loss: 0.3493 - val_acc: 0.8389
Epoch 2/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.3105 - acc: 0.8603Epoch 00001: val_loss improved from 0.34933 to 0.30979, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 35s - loss: 0.3105 - acc: 0.8603 - val_loss: 0.3098 - val_acc: 0.8569
Epoch 3/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2905 - acc: 0.8699Epoch 00002: val_loss improved from 0.30979 to 0.29465, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 35s - loss: 0.2905 - acc: 0.8699 - val_loss: 0.2947 - val_acc: 0.8666
Epoch 4/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2772 - acc: 0.8769Epoch 00003: val_loss improved from 0.29465 to 0.28098, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 35s - loss: 0.2772 - acc: 0.8769 - val_loss: 0.2810 - val_acc: 0.8750
Epoch 5/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2670 - acc: 0.8820Epoch 00004: val_loss improved from 0.28098 to 0.28072, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646862/646862 [==============================] - 35s - loss: 0.2670 - acc: 0.8820 - val_loss: 0.2807 - val_acc: 0.8760
Epoch 6/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2580 - acc: 0.8861Epoch 00005: val_loss did not improve
646862/646862 [==============================] - 34s - loss: 0.2581 - acc: 0.8861 - val_loss: 0.2819 - val_acc: 0.8753
Epoch 7/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2512 - acc: 0.8894Epoch 00006: val_loss did not improve
646862/646862 [==============================] - 34s - loss: 0.2511 - acc: 0.8895 - val_loss: 0.2877 - val_acc: 0.8719
Epoch 8/200
645120/646862 [============================>.] - ETA: 0s - loss: 0.2453 - acc: 0.8923Epoch 00007: val_loss did not improve
646862/646862 [==============================] - 34s - loss: 0.2453 - acc: 0.8923 - val_loss: 0.2858 - val_acc: 0.8724
Epoch 00007: early stopping
2345796/2345796 [==============================] - 39s    
2344960/2345796 [============================>.] - ETA: 0s
Fitting fold 3 of 5

Train on 646864 samples, validate on 161716 samples
Epoch 1/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.3590 - acc: 0.8422Epoch 00000: val_loss improved from inf to 0.34252, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646864/646864 [==============================] - 36s - loss: 0.3589 - acc: 0.8422 - val_loss: 0.3425 - val_acc: 0.8415
Epoch 2/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.3109 - acc: 0.8605Epoch 00001: val_loss improved from 0.34252 to 0.29929, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646864/646864 [==============================] - 35s - loss: 0.3108 - acc: 0.8605 - val_loss: 0.2993 - val_acc: 0.8657
Epoch 3/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2918 - acc: 0.8698Epoch 00002: val_loss improved from 0.29929 to 0.28642, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646864/646864 [==============================] - 35s - loss: 0.2918 - acc: 0.8699 - val_loss: 0.2864 - val_acc: 0.8716
Epoch 4/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2787 - acc: 0.8763Epoch 00003: val_loss improved from 0.28642 to 0.28345, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646864/646864 [==============================] - 35s - loss: 0.2787 - acc: 0.8763 - val_loss: 0.2835 - val_acc: 0.8724
Epoch 5/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2691 - acc: 0.8813Epoch 00004: val_loss improved from 0.28345 to 0.28249, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646864/646864 [==============================] - 35s - loss: 0.2691 - acc: 0.8813 - val_loss: 0.2825 - val_acc: 0.8754
Epoch 6/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2606 - acc: 0.8852Epoch 00005: val_loss improved from 0.28249 to 0.27647, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646864/646864 [==============================] - 35s - loss: 0.2606 - acc: 0.8852 - val_loss: 0.2765 - val_acc: 0.8769
Epoch 7/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2531 - acc: 0.8889Epoch 00006: val_loss improved from 0.27647 to 0.27338, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646864/646864 [==============================] - 35s - loss: 0.2531 - acc: 0.8889 - val_loss: 0.2734 - val_acc: 0.8790
Epoch 8/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2464 - acc: 0.8920Epoch 00007: val_loss improved from 0.27338 to 0.26881, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646864/646864 [==============================] - 35s - loss: 0.2464 - acc: 0.8919 - val_loss: 0.2688 - val_acc: 0.8802
Epoch 9/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2410 - acc: 0.8947Epoch 00008: val_loss did not improve
646864/646864 [==============================] - 34s - loss: 0.2410 - acc: 0.8946 - val_loss: 0.2780 - val_acc: 0.8736
Epoch 10/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2352 - acc: 0.8973Epoch 00009: val_loss did not improve
646864/646864 [==============================] - 34s - loss: 0.2352 - acc: 0.8973 - val_loss: 0.2724 - val_acc: 0.8810
Epoch 11/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2305 - acc: 0.8998Epoch 00010: val_loss did not improve
646864/646864 [==============================] - 34s - loss: 0.2306 - acc: 0.8998 - val_loss: 0.2738 - val_acc: 0.8765
Epoch 12/200
645120/646864 [============================>.] - ETA: 0s - loss: 0.2258 - acc: 0.9018Epoch 00011: val_loss did not improve
646864/646864 [==============================] - 34s - loss: 0.2258 - acc: 0.9018 - val_loss: 0.2813 - val_acc: 0.8786
Epoch 00011: early stopping
80858/80858 [==============================] - 1s     
2344960/2345796 [============================>.] - ETA: 0s
Fitting fold 4 of 5

Train on 646866 samples, validate on 161714 samples
Epoch 1/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.3595 - acc: 0.8422Epoch 00000: val_loss improved from inf to 0.33843, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 36s - loss: 0.3594 - acc: 0.8423 - val_loss: 0.3384 - val_acc: 0.8495
Epoch 2/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.3118 - acc: 0.8597Epoch 00001: val_loss improved from 0.33843 to 0.30925, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.3118 - acc: 0.8597 - val_loss: 0.3093 - val_acc: 0.8599
Epoch 3/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2917 - acc: 0.8693Epoch 00002: val_loss improved from 0.30925 to 0.29044, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2917 - acc: 0.8693 - val_loss: 0.2904 - val_acc: 0.8682
Epoch 4/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2790 - acc: 0.8762Epoch 00003: val_loss improved from 0.29044 to 0.28157, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2791 - acc: 0.8761 - val_loss: 0.2816 - val_acc: 0.8746
Epoch 5/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2689 - acc: 0.8810Epoch 00004: val_loss improved from 0.28157 to 0.27513, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2690 - acc: 0.8810 - val_loss: 0.2751 - val_acc: 0.8777
Epoch 6/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2606 - acc: 0.8848Epoch 00005: val_loss improved from 0.27513 to 0.27183, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2605 - acc: 0.8848 - val_loss: 0.2718 - val_acc: 0.8791
Epoch 7/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2533 - acc: 0.8885Epoch 00006: val_loss improved from 0.27183 to 0.27021, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2532 - acc: 0.8885 - val_loss: 0.2702 - val_acc: 0.8800
Epoch 8/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2466 - acc: 0.8915Epoch 00007: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2466 - acc: 0.8915 - val_loss: 0.2715 - val_acc: 0.8788
Epoch 9/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2413 - acc: 0.8938Epoch 00008: val_loss improved from 0.27021 to 0.26384, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2413 - acc: 0.8939 - val_loss: 0.2638 - val_acc: 0.8830
Epoch 10/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2355 - acc: 0.8972Epoch 00009: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2355 - acc: 0.8972 - val_loss: 0.2657 - val_acc: 0.8819
Epoch 11/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2306 - acc: 0.8995Epoch 00010: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2306 - acc: 0.8995 - val_loss: 0.2658 - val_acc: 0.8829
Epoch 12/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2260 - acc: 0.9017Epoch 00011: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2260 - acc: 0.9017 - val_loss: 0.2663 - val_acc: 0.8840
Epoch 13/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2218 - acc: 0.9034Epoch 00012: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2218 - acc: 0.9034 - val_loss: 0.2716 - val_acc: 0.8832
Epoch 00012: early stopping
80857/80857 [==============================] - 1s     
2345796/2345796 [==============================] - 38s    
2342912/2345796 [============================>.] - ETA: 0s
Fitting fold 5 of 5

Train on 646866 samples, validate on 161714 samples
Epoch 1/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.3588 - acc: 0.8420Epoch 00000: val_loss improved from inf to 0.38813, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 36s - loss: 0.3587 - acc: 0.8421 - val_loss: 0.3881 - val_acc: 0.7911
Epoch 2/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.3109 - acc: 0.8605Epoch 00001: val_loss improved from 0.38813 to 0.30509, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.3109 - acc: 0.8605 - val_loss: 0.3051 - val_acc: 0.8650
Epoch 3/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2915 - acc: 0.8695Epoch 00002: val_loss improved from 0.30509 to 0.28836, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2915 - acc: 0.8695 - val_loss: 0.2884 - val_acc: 0.8711
Epoch 4/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2792 - acc: 0.8755Epoch 00003: val_loss improved from 0.28836 to 0.28503, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2792 - acc: 0.8755 - val_loss: 0.2850 - val_acc: 0.8725
Epoch 5/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2694 - acc: 0.8807Epoch 00004: val_loss improved from 0.28503 to 0.27480, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2694 - acc: 0.8807 - val_loss: 0.2748 - val_acc: 0.8785
Epoch 6/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2608 - acc: 0.8848Epoch 00005: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2608 - acc: 0.8848 - val_loss: 0.2778 - val_acc: 0.8751
Epoch 7/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2530 - acc: 0.8882Epoch 00006: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2531 - acc: 0.8882 - val_loss: 0.2778 - val_acc: 0.8756
Epoch 8/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2471 - acc: 0.8912Epoch 00007: val_loss improved from 0.27480 to 0.27173, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2471 - acc: 0.8912 - val_loss: 0.2717 - val_acc: 0.8783
Epoch 9/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2413 - acc: 0.8941Epoch 00008: val_loss improved from 0.27173 to 0.26839, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 35s - loss: 0.2413 - acc: 0.8941 - val_loss: 0.2684 - val_acc: 0.8811
Epoch 10/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2361 - acc: 0.8969Epoch 00009: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2362 - acc: 0.8968 - val_loss: 0.2718 - val_acc: 0.8774
Epoch 11/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2312 - acc: 0.8995Epoch 00010: val_loss improved from 0.26839 to 0.26481, saving model to /home/yuriyguts/Projects/kaggle-quora-question-pairs/data/tmp/fold-checkpoint-oofp_nn_cnn_with_magic.h5
646866/646866 [==============================] - 34s - loss: 0.2313 - acc: 0.8995 - val_loss: 0.2648 - val_acc: 0.8823
Epoch 12/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2268 - acc: 0.9011Epoch 00011: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2268 - acc: 0.9011 - val_loss: 0.2696 - val_acc: 0.8792
Epoch 13/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2226 - acc: 0.9028Epoch 00012: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2226 - acc: 0.9028 - val_loss: 0.2661 - val_acc: 0.8832
Epoch 14/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2181 - acc: 0.9055Epoch 00013: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2182 - acc: 0.9055 - val_loss: 0.2735 - val_acc: 0.8777
Epoch 15/200
645120/646866 [============================>.] - ETA: 0s - loss: 0.2154 - acc: 0.9067Epoch 00014: val_loss did not improve
646866/646866 [==============================] - 34s - loss: 0.2154 - acc: 0.9067 - val_loss: 0.2687 - val_acc: 0.8837
Epoch 00014: early stopping
2345796/2345796 [==============================] - 38s    
2342912/2345796 [============================>.] - ETA: 0s
CPU times: user 24min 59s, sys: 3min 2s, total: 28min 2s
Wall time: 42min 8s

In [36]:
cv_score = log_loss(y_train, y_train_oofp)
print('CV score:', cv_score)


CV score: 0.264377776985
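The CV score is the mean binary cross-entropy over the out-of-fold predictions. A toy check that sklearn's `log_loss` matches the hand-written formula:

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.2, 0.7, 0.6])

# Binary cross-entropy: -mean(y*log(p) + (1-y)*log(1-p)).
manual = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(np.isclose(log_loss(y_true, y_pred), manual))  # True
```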

Save features


In [37]:
feature_names = [feature_list_id]

In [38]:
features_train = y_train_oofp.reshape((-1, 1))

In [39]:
features_test = np.mean(y_test_oofp, axis=1).reshape((-1, 1))
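Averaging the per-fold columns of the test placeholder yields a single bagged prediction per test row. A toy example of the same reduction:

```python
import numpy as np

# Toy test predictions: 3 examples x 5 folds.
y_test_folds = np.array([
    [0.1, 0.2, 0.3, 0.2, 0.2],
    [0.8, 0.9, 0.7, 0.8, 0.8],
    [0.5, 0.5, 0.5, 0.5, 0.5],
])

# Average across the fold axis, then reshape to a feature column.
bagged = np.mean(y_test_folds, axis=1).reshape((-1, 1))
print(bagged.ravel())  # [0.2 0.8 0.5]
```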

In [40]:
project.save_features(features_train, features_test, feature_names, feature_list_id)

Explore


In [41]:
pd.DataFrame(features_test).plot.hist()


Out[41]:
<matplotlib.axes._subplots.AxesSubplot at 0x7fe999025c50>