Train Toxicity Model

This notebook trains a model to detect toxicity in online comments. It uses a CNN architecture for text classification, trained on the Wikipedia Talk Labels: Toxicity dataset with pre-trained GloVe embeddings, which can be downloaded from http://nlp.stanford.edu/data/glove.6B.zip (source page: http://nlp.stanford.edu/projects/glove/).
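
The network itself is defined in model_tool.py. As an orientation only, a stack matching the hyperparameter printouts further down (filters [128, 128, 128], kernels [5, 5, 5], pooling sizes [5, 5, 40], frozen 100-dimensional embeddings over 250-token inputs) might look like the sketch below; the 'same' padding and the sigmoid output are assumptions, not confirmed details of ToxModel.

from keras.layers import Input, Embedding, Conv1D, MaxPooling1D, Flatten, Dropout, Dense
from keras.models import Model

def build_cnn_sketch(embedding_matrix, max_sequence_length=250,
                     filter_sizes=(128, 128, 128), kernel_sizes=(5, 5, 5),
                     pooling_sizes=(5, 5, 40), dropout_rate=0.3):
    """Hypothetical reconstruction of ToxModel's graph; see model_tool.py."""
    # Integer token ids, padded/truncated to max_sequence_length.
    seq = Input(shape=(max_sequence_length,), dtype='int32')
    # Frozen pre-trained GloVe vectors (embedding_trainable: False).
    x = Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                  weights=[embedding_matrix], trainable=False)(seq)
    # Three conv/pool blocks; 'same' padding keeps the last pooling size valid.
    for filters, kernel, pool in zip(filter_sizes, kernel_sizes, pooling_sizes):
        x = Conv1D(filters, kernel, activation='relu', padding='same')(x)
        x = MaxPooling1D(pool, padding='same')(x)
    x = Flatten()(x)
    x = Dropout(dropout_rate)(x)
    preds = Dense(1, activation='sigmoid')(x)  # estimated P(is_toxic)
    return Model(seq, preds)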

This model is a modification of example code found in the Keras GitHub repository, released under the MIT license. For the full license text, see the file KERAS_LICENSE in this repository or the license online.

Usage Instructions

(TODO: nthain) - Move to README

Prior to running the notebook, you must:

- Download the pre-trained GloVe embeddings from the link above and unzip them where model_tool loads them from.
- Place the dataset CSV splits used below (wiki_*, wiki_debias_*, and wiki_debias_random_* for train/dev/test) in ../data/.
- Make sure a ../models/ directory exists; the best checkpoint for each model is saved there.

In [1]:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import pandas as pd

# ToxModel bundles the tokenizer, the CNN graph, training, and evaluation;
# it is defined in model_tool.py.
from model_tool import ToxModel


Using TensorFlow backend.
HELLO from model_tool

Load Data


In [2]:
SPLITS = ['train', 'dev', 'test']

# Per-split CSV paths for the three dataset variants: the original
# Wikipedia data, the debiased version, and the 'debias_random' control.
# Note that `random` shadows the standard-library module of the same name.
wiki = {}
debias = {}
random = {}
for split in SPLITS:
    wiki[split] = '../data/wiki_%s.csv' % split
    debias[split] = '../data/wiki_debias_%s.csv' % split
    random[split] = '../data/wiki_debias_random_%s.csv' % split
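
Assuming the CSVs are already in ../data/, a quick optional sanity check that a split has the two columns train() expects below ('comment' text and binary 'is_toxic' labels):

pd.read_csv(wiki['train'])[['comment', 'is_toxic']].head()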

Train Models


In [9]:
hparams = {'epochs': 4}
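
Only epochs is overridden here; everything else falls back to ToxModel's defaults, which are echoed in the hyperparameter printout under each training call. Presumably any key from that printout can be overridden the same way, e.g. (hypothetical, untested):

hparams = {'epochs': 4, 'learning_rate': 5e-05, 'batch_size': 128}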

Random model


In [10]:
MODEL_NAME = 'cnn_debias_random_tox_v3'
debias_random_model = ToxModel(hparams=hparams)
debias_random_model.train(random['train'], random['dev'],
                          text_column='comment', label_column='is_toxic',
                          model_name=MODEL_NAME)


Hyperparameters
---------------
max_num_words: 10000
dropout_rate: 0.3
verbose: True
cnn_pooling_sizes: [5, 5, 40]
es_min_delta: 0
learning_rate: 5e-05
es_patience: 1
batch_size: 128
embedding_dim: 100
epochs: 4
cnn_filter_sizes: [128, 128, 128]
cnn_kernel_sizes: [5, 5, 5]
max_sequence_length: 250
stop_early: True
embedding_trainable: False

Fitting tokenizer...
Tokenizer fitted!
Preparing data...
Data prepared!
Loading embeddings...
Embeddings loaded!
Building model graph...
Training model...
Train on 99157 samples, validate on 33283 samples
Epoch 1/4
Epoch 00000: val_loss improved from inf to 0.16775, saving model to ../models/cnn_debias_random_tox_v3_model.h5
217s - loss: 0.2379 - acc: 0.9179 - val_loss: 0.1677 - val_acc: 0.9395
Epoch 2/4
Epoch 00001: val_loss improved from 0.16775 to 0.15632, saving model to ../models/cnn_debias_random_tox_v3_model.h5
212s - loss: 0.1617 - acc: 0.9409 - val_loss: 0.1563 - val_acc: 0.9456
Epoch 3/4
Epoch 00002: val_loss improved from 0.15632 to 0.13708, saving model to ../models/cnn_debias_random_tox_v3_model.h5
223s - loss: 0.1434 - acc: 0.9473 - val_loss: 0.1371 - val_acc: 0.9496
Epoch 4/4
Epoch 00003: val_loss did not improve
221s - loss: 0.1315 - acc: 0.9518 - val_loss: 0.1448 - val_acc: 0.9508
Model trained!
Best model saved to ../models/cnn_debias_random_tox_v3_model.h5
Loading best model from checkpoint...
Model loaded!
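
The checkpointing and early-stopping behavior in the log above (best val_loss saved to ../models/, es_patience: 1) matches standard Keras callbacks. Presumably ToxModel wires up something along these lines; a sketch under that assumption, not the confirmed implementation:

from keras.callbacks import EarlyStopping, ModelCheckpoint

save_path = '../models/%s_model.h5' % MODEL_NAME
callbacks = [
    # Stop once val_loss fails to improve by es_min_delta for es_patience epochs.
    EarlyStopping(monitor='val_loss', min_delta=0, patience=1),
    # Keep only the best weights seen so far, as the log above shows.
    ModelCheckpoint(save_path, monitor='val_loss', save_best_only=True, verbose=1),
]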

In [11]:
random_test = pd.read_csv(random['test'])
debias_random_model.score_auc(random_test['comment'], random_test['is_toxic'])


Out[11]:
0.95052315456614356
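
score_auc computes the ROC AUC of the model's toxicity scores against the binary labels. A minimal sketch of such a method with scikit-learn; the real implementation lives in model_tool.py, and prep_text (tokenize and pad raw comments) is a hypothetical helper name:

from sklearn.metrics import roc_auc_score

def score_auc(self, texts, labels):
    # Hypothetical: self.prep_text is assumed to turn raw comments into
    # the padded integer sequences the Keras model expects.
    scores = self.model.predict(self.prep_text(texts))[:, 0]
    return roc_auc_score(labels, scores)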

Plain Wikipedia model


In [12]:
MODEL_NAME = 'cnn_wiki_tox_v3'
wiki_model = ToxModel(hparams=hparams)
wiki_model.train(wiki['train'], wiki['dev'],
                 text_column='comment', label_column='is_toxic',
                 model_name=MODEL_NAME)


Hyperparameters
---------------
max_num_words: 10000
dropout_rate: 0.3
verbose: True
cnn_pooling_sizes: [5, 5, 40]
es_min_delta: 0
learning_rate: 5e-05
es_patience: 1
batch_size: 128
embedding_dim: 100
epochs: 4
cnn_filter_sizes: [128, 128, 128]
cnn_kernel_sizes: [5, 5, 5]
max_sequence_length: 250
stop_early: True
embedding_trainable: False

Fitting tokenizer...
Tokenizer fitted!
Preparing data...
Data prepared!
Loading embeddings...
Embeddings loaded!
Building model graph...
Training model...
Train on 95692 samples, validate on 32128 samples
Epoch 1/4
Epoch 00000: val_loss improved from inf to 0.17710, saving model to ../models/cnn_wiki_tox_v3_model.h5
215s - loss: 0.2342 - acc: 0.9180 - val_loss: 0.1771 - val_acc: 0.9358
Epoch 2/4
Epoch 00001: val_loss improved from 0.17710 to 0.14871, saving model to ../models/cnn_wiki_tox_v3_model.h5
213s - loss: 0.1642 - acc: 0.9393 - val_loss: 0.1487 - val_acc: 0.9450
Epoch 3/4
Epoch 00002: val_loss did not improve
197s - loss: 0.1455 - acc: 0.9464 - val_loss: 0.1549 - val_acc: 0.9467
Epoch 4/4
Epoch 00003: val_loss improved from 0.14871 to 0.14151, saving model to ../models/cnn_wiki_tox_v3_model.h5
2025s - loss: 0.1345 - acc: 0.9504 - val_loss: 0.1415 - val_acc: 0.9493
Model trained!
Best model saved to ../models/cnn_wiki_tox_v3_model.h5
Loading best model from checkpoint...
Model loaded!

In [13]:
wiki_test = pd.read_csv(wiki['test'])
wiki_model.score_auc(wiki_test['comment'], wiki_test['is_toxic'])


Out[13]:
0.95333902362896905

Debiased model


In [6]:
MODEL_NAME = 'cnn_debias_tox_v3'
debias_model = ToxModel(hparams=hparams)
debias_model.train(debias['train'], debias['dev'],
                   text_column='comment', label_column='is_toxic',
                   model_name=MODEL_NAME)


Hyperparameters
---------------
max_num_words: 10000
dropout_rate: 0.3
verbose: True
cnn_pooling_sizes: [5, 5, 40]
es_min_delta: 0
learning_rate: 5e-05
es_patience: 1
batch_size: 128
embedding_dim: 100
epochs: 4
cnn_filter_sizes: [128, 128, 128]
cnn_kernel_sizes: [5, 5, 5]
max_sequence_length: 250
stop_early: True
embedding_trainable: False

Fitting tokenizer...
Tokenizer fitted!
Preparing data...
Data prepared!
Loading embeddings...
Embeddings loaded!
Building model graph...
Training model...
Train on 99157 samples, validate on 33283 samples
Epoch 1/4
Epoch 00000: val_loss improved from inf to 0.16653, saving model to ../models/cnn_debias_tox_v3_model.h5
213s - loss: 0.2360 - acc: 0.9173 - val_loss: 0.1665 - val_acc: 0.9387
Epoch 2/4
Epoch 00001: val_loss improved from 0.16653 to 0.14609, saving model to ../models/cnn_debias_tox_v3_model.h5
221s - loss: 0.1620 - acc: 0.9411 - val_loss: 0.1461 - val_acc: 0.9476
Epoch 3/4
Epoch 00002: val_loss improved from 0.14609 to 0.13639, saving model to ../models/cnn_debias_tox_v3_model.h5
224s - loss: 0.1443 - acc: 0.9472 - val_loss: 0.1364 - val_acc: 0.9498
Epoch 4/4
Epoch 00003: val_loss did not improve
227s - loss: 0.1332 - acc: 0.9511 - val_loss: 0.1397 - val_acc: 0.9506
Model trained!
Best model saved to ../models/cnn_debias_tox_v3_model.h5
Loading best model from checkpoint...
Model loaded!

In [8]:
debias_test = pd.read_csv(debias['test'])
debias_model.score_auc(debias_test['comment'], debias_test['is_toxic'])


Out[8]:
0.9513326210562385
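
All three models expose the same score_auc interface, so they can also be compared on one common test set rather than each on its own. An illustrative comparison (not executed above) on the original Wikipedia test split:

for name, model in [('wiki', wiki_model),
                    ('debias', debias_model),
                    ('debias_random', debias_random_model)]:
    print(name, model.score_auc(wiki_test['comment'], wiki_test['is_toxic']))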

In [ ]: