In [1]:
%run init.ipynb


Using TensorFlow backend.
matchzoo version 2.1.0

data loading ...
data loaded as `train_pack_raw` `dev_pack_raw` `test_pack_raw`
`ranking_task` initialized with metrics [normalized_discounted_cumulative_gain@3(0.0), normalized_discounted_cumulative_gain@5(0.0), mean_average_precision(0.0)]
loading embedding ...
embedding loaded as `glove_embedding`

In [19]:
preprocessor = mz.preprocessors.BasicPreprocessor(fixed_length_left=10, fixed_length_right=40, remove_stop_words=False)

In [20]:
train_pack_processed = preprocessor.fit_transform(train_pack_raw)
dev_pack_processed = preprocessor.transform(dev_pack_raw)
test_pack_processed = preprocessor.transform(test_pack_raw)


Processing text_left with chain_transform of Tokenize => Lowercase => PuncRemoval: 100%|██████████| 2118/2118 [00:00<00:00, 5660.56it/s]
Processing text_right with chain_transform of Tokenize => Lowercase => PuncRemoval: 100%|██████████| 18841/18841 [00:04<00:00, 4467.85it/s]
Processing text_right with append: 100%|██████████| 18841/18841 [00:00<00:00, 786301.58it/s]
Building FrequencyFilter from a datapack.: 100%|██████████| 18841/18841 [00:00<00:00, 122848.89it/s]
Processing text_right with transform: 100%|██████████| 18841/18841 [00:00<00:00, 139092.07it/s]
Processing text_left with extend: 100%|██████████| 2118/2118 [00:00<00:00, 454679.90it/s]
Processing text_right with extend: 100%|██████████| 18841/18841 [00:00<00:00, 669810.24it/s]
Building Vocabulary from a datapack.: 100%|██████████| 404432/404432 [00:00<00:00, 2422809.59it/s]
Processing text_left with chain_transform of Tokenize => Lowercase => PuncRemoval: 100%|██████████| 2118/2118 [00:00<00:00, 8241.89it/s]
Processing text_right with chain_transform of Tokenize => Lowercase => PuncRemoval: 100%|██████████| 18841/18841 [00:04<00:00, 4600.64it/s]
Processing text_right with transform: 100%|██████████| 18841/18841 [00:00<00:00, 118459.97it/s]
Processing text_left with transform: 100%|██████████| 2118/2118 [00:00<00:00, 188154.70it/s]
Processing text_right with transform: 100%|██████████| 18841/18841 [00:00<00:00, 41857.09it/s]
Processing length_left with len: 100%|██████████| 2118/2118 [00:00<00:00, 588586.49it/s]
Processing length_right with len: 100%|██████████| 18841/18841 [00:00<00:00, 725717.97it/s]
Processing text_left with transform: 100%|██████████| 2118/2118 [00:00<00:00, 114181.33it/s]
Processing text_right with transform: 100%|██████████| 18841/18841 [00:00<00:00, 89142.16it/s]
Processing text_left with chain_transform of Tokenize => Lowercase => PuncRemoval: 100%|██████████| 122/122 [00:00<00:00, 8251.31it/s]
Processing text_right with chain_transform of Tokenize => Lowercase => PuncRemoval: 100%|██████████| 1115/1115 [00:00<00:00, 4700.14it/s]
Processing text_right with transform: 100%|██████████| 1115/1115 [00:00<00:00, 119923.30it/s]
Processing text_left with transform: 100%|██████████| 122/122 [00:00<00:00, 97728.24it/s]
Processing text_right with transform: 100%|██████████| 1115/1115 [00:00<00:00, 124012.86it/s]
Processing length_left with len: 100%|██████████| 122/122 [00:00<00:00, 181842.60it/s]
Processing length_right with len: 100%|██████████| 1115/1115 [00:00<00:00, 544176.05it/s]
Processing text_left with transform: 100%|██████████| 122/122 [00:00<00:00, 89147.23it/s]
Processing text_right with transform: 100%|██████████| 1115/1115 [00:00<00:00, 89651.09it/s]
Processing text_left with chain_transform of Tokenize => Lowercase => PuncRemoval: 100%|██████████| 237/237 [00:00<00:00, 8348.80it/s]
Processing text_right with chain_transform of Tokenize => Lowercase => PuncRemoval: 100%|██████████| 2300/2300 [00:00<00:00, 4593.53it/s]
Processing text_right with transform: 100%|██████████| 2300/2300 [00:00<00:00, 131653.35it/s]
Processing text_left with transform: 100%|██████████| 237/237 [00:00<00:00, 165840.85it/s]
Processing text_right with transform: 100%|██████████| 2300/2300 [00:00<00:00, 132096.83it/s]
Processing length_left with len: 100%|██████████| 237/237 [00:00<00:00, 273842.99it/s]
Processing length_right with len: 100%|██████████| 2300/2300 [00:00<00:00, 639163.80it/s]
Processing text_left with transform: 100%|██████████| 237/237 [00:00<00:00, 89064.60it/s]
Processing text_right with transform: 100%|██████████| 2300/2300 [00:00<00:00, 86151.49it/s]
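
`fit_transform` fits the frequency filter and vocabulary on the training pack only; the dev and test packs are merely transformed, so no statistics leak from them. As a quick sanity check (not part of the original run, and assuming the `DataPack.frame` API), you can inspect a few processed rows:

In [ ]:
# Text columns now hold lists of integer ids, padded/truncated to the
# fixed lengths configured above (10 for text_left, 40 for text_right).
train_pack_processed.frame().head()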

In [21]:
preprocessor.context


Out[21]:
{'filter_unit': <matchzoo.preprocessors.units.frequency_filter.FrequencyFilter at 0x7f30f43876a0>,
 'vocab_unit': <matchzoo.preprocessors.units.vocabulary.Vocabulary at 0x7f3176e6aeb8>,
 'vocab_size': 16674,
 'embedding_input_dim': 16674,
 'input_shapes': [(10,), (40,)]}
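
This `context` dictionary is what `model.params.update(preprocessor.context)` feeds into the model below: `vocab_size` and `embedding_input_dim` size the embedding layer, and `input_shapes` fixes the two input lengths. An optional peek at the fitted vocabulary (not in the original run):

In [ ]:
# `term_index` maps tokens to integer ids; the same mapping is used later
# to build the GloVe embedding matrix.
term_index = preprocessor.context['vocab_unit'].state['term_index']
list(term_index.items())[:5]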

In [22]:
model = mz.contrib.models.MatchLSTM()
model.params.update(preprocessor.context)
model.params['task'] = ranking_task
model.params['embedding_output_dim'] = 100
model.params['embedding_trainable'] = True
model.params['fc_num_units'] = 100
model.params['lstm_num_units'] = 100
model.params['dropout_rate'] = 0.5
model.params['optimizer'] = 'adadelta'
model.guess_and_fill_missing_params()
model.build()
model.compile()
print(model.params)


model_class                   <class 'matchzoo.contrib.models.match_lstm.MatchLSTM'>
input_shapes                  [(10,), (40,)]
task                          Ranking Task
optimizer                     adadelta
with_embedding                True
embedding_input_dim           16674
embedding_output_dim          100
embedding_trainable           True
lstm_num_units                100
fc_num_units                  100
dropout_rate                  0.5

In [23]:
model.backend.summary()


__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
text_left (InputLayer)          (None, 10)           0                                            
__________________________________________________________________________________________________
text_right (InputLayer)         (None, 40)           0                                            
__________________________________________________________________________________________________
embedding (Embedding)           multiple             1667400     text_left[0][0]                  
                                                                 text_right[0][0]                 
__________________________________________________________________________________________________
lstm_left (LSTM)                (None, 10, 100)      80400       embedding[0][0]                  
__________________________________________________________________________________________________
lstm_right (LSTM)               (None, 40, 100)      80400       embedding[1][0]                  
__________________________________________________________________________________________________
lambda_10 (Lambda)              (None, 10, 100)      0           lstm_left[0][0]                  
                                                                 lstm_right[0][0]                 
__________________________________________________________________________________________________
concatenate_4 (Concatenate)     (None, 50, 100)      0           lambda_10[0][0]                  
                                                                 lstm_right[0][0]                 
__________________________________________________________________________________________________
lstm_merge (LSTM)               (None, 200)          240800      concatenate_4[0][0]              
__________________________________________________________________________________________________
dropout_4 (Dropout)             (None, 200)          0           lstm_merge[0][0]                 
__________________________________________________________________________________________________
dense_23 (Dense)                (None, 100)          20100       dropout_4[0][0]                  
__________________________________________________________________________________________________
dense_24 (Dense)                (None, 1)            101         dense_23[0][0]                   
==================================================================================================
Total params: 2,089,201
Trainable params: 2,089,201
Non-trainable params: 0
__________________________________________________________________________________________________

In [24]:
embedding_matrix = glove_embedding.build_matrix(preprocessor.context['vocab_unit'].state['term_index'])
model.load_embedding_matrix(embedding_matrix)
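
Some MatchZoo tutorials additionally L2-normalize the embedding rows before loading them; whether that helps here is untested. An optional variant (an assumption, not part of this run):

In [ ]:
import numpy as np

l2_norm = np.sqrt((embedding_matrix * embedding_matrix).sum(axis=1))
l2_norm[l2_norm == 0] = 1  # keep the all-zero padding row from dividing by zero
model.load_embedding_matrix(embedding_matrix / l2_norm[:, np.newaxis])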

In [25]:
test_x, test_y = test_pack_processed.unpack()
evaluate = mz.callbacks.EvaluateAllMetrics(model, x=test_x, y=test_y, batch_size=len(test_y))  # test_x is a dict of arrays, so len(test_y) gives the number of examples
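
`EvaluateAllMetrics` recomputes all of the task's metrics on the test set after every epoch. The same evaluation can also be run directly via `model.evaluate`, which is part of the MatchZoo `BaseModel` API (an optional check, not in the original run):

In [ ]:
# Scores of the untrained model, as a baseline for the training log below.
model.evaluate(test_x, test_y, batch_size=len(test_y))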

In [26]:
train_generator = mz.DataGenerator(
    train_pack_processed,
    mode='pair',
    num_dup=2,
    num_neg=1,
    batch_size=20
)
print('num batches:', len(train_generator))


num batches: 102
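
In pair mode the generator duplicates each positive example `num_dup` times and pairs it with `num_neg` sampled negatives, which is the arrangement a pairwise loss expects. The `ranking_task` created in init.ipynb is assumed to look roughly like this (shown for reference only; the metrics match the ones printed at load time):

In [ ]:
ranking_task = mz.tasks.Ranking(loss=mz.losses.RankHingeLoss())
ranking_task.metrics = [
    mz.metrics.NormalizedDiscountedCumulativeGain(k=3),
    mz.metrics.NormalizedDiscountedCumulativeGain(k=5),
    mz.metrics.MeanAveragePrecision()
]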

In [27]:
history = model.fit_generator(train_generator, epochs=10, callbacks=[evaluate], workers=4, use_multiprocessing=True)


Epoch 1/10
102/102 [==============================] - 13s 130ms/step - loss: 0.9022
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.56891574533972 - normalized_discounted_cumulative_gain@5(0.0): 0.6259075966908896 - mean_average_precision(0.0): 0.5895521163084454
Epoch 2/10
102/102 [==============================] - 11s 106ms/step - loss: 0.6955
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.5876851995953162 - normalized_discounted_cumulative_gain@5(0.0): 0.6407140437458756 - mean_average_precision(0.0): 0.5965985760516177
Epoch 3/10
102/102 [==============================] - 11s 105ms/step - loss: 0.6073
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.6151453530205596 - normalized_discounted_cumulative_gain@5(0.0): 0.6639169915844698 - mean_average_precision(0.0): 0.6198851976278136
Epoch 4/10
102/102 [==============================] - 11s 105ms/step - loss: 0.5805
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.6097028553948147 - normalized_discounted_cumulative_gain@5(0.0): 0.6654420380026644 - mean_average_precision(0.0): 0.6240460033736575
Epoch 5/10
102/102 [==============================] - 11s 108ms/step - loss: 0.5180
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.6133176603089614 - normalized_discounted_cumulative_gain@5(0.0): 0.6538666262678027 - mean_average_precision(0.0): 0.6188266626371615
Epoch 6/10
102/102 [==============================] - 11s 107ms/step - loss: 0.4860
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.6196634602764765 - normalized_discounted_cumulative_gain@5(0.0): 0.6765955662781967 - mean_average_precision(0.0): 0.6318559407749947
Epoch 7/10
102/102 [==============================] - 11s 107ms/step - loss: 0.4297
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.5911951516660675 - normalized_discounted_cumulative_gain@5(0.0): 0.6356282828122544 - mean_average_precision(0.0): 0.5974699334878332
Epoch 8/10
102/102 [==============================] - 11s 105ms/step - loss: 0.3946
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.6316524262499843 - normalized_discounted_cumulative_gain@5(0.0): 0.6774169076547488 - mean_average_precision(0.0): 0.6368686569077484
Epoch 9/10
102/102 [==============================] - 11s 106ms/step - loss: 0.3788
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.5969334279811508 - normalized_discounted_cumulative_gain@5(0.0): 0.6513736764474628 - mean_average_precision(0.0): 0.607785113937385
Epoch 10/10
102/102 [==============================] - 11s 106ms/step - loss: 0.3170
Validation: normalized_discounted_cumulative_gain@3(0.0): 0.6266464172038838 - normalized_discounted_cumulative_gain@5(0.0): 0.6722764129615637 - mean_average_precision(0.0): 0.6201359089808456
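
Validation scores wobble from epoch to epoch, as expected with stochastic pair sampling; MAP peaks at about 0.637 in epoch 8. Two optional follow-ups (not part of the original run; both calls are in the MatchZoo 2.x API):

In [ ]:
# Final test-set scores, then persist the trained model to disk.
print(model.evaluate(test_x, test_y, batch_size=len(test_y)))
model.save('match_lstm_model')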

Use this function to update README.md with a better set of parameters. Since it appends rather than replaces, make sure to delete the outdated parameter section from README.md before calling it.


In [29]:
append_params_to_readme(model)
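
`append_params_to_readme` is defined in init.ipynb and not shown here. A purely hypothetical sketch of what such a helper might do:

In [ ]:
def append_params_to_readme_sketch(model, path='README.md'):
    """Hypothetical: append the model's parameter table to the README."""
    with open(path, 'a') as f:
        f.write('\n' + str(model.params) + '\n')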