Определение авторства статьи на основе нейронных сетей

Постановка задачи:

  • ......
  • ......

Распоковка датасета


In [1]:
!unzip -u dataset.zip
print('Success')


Archive:  dataset.zip
Success

In [2]:
!zip -r datasetCSV.zip datasetHabrahabr.csv


  adding: datasetHabrahabr.csv (deflated 76%)

Импорт пакетов


In [1]:
from matplotlib import pyplot as plt
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.model_selection import train_test_split
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Dense, Activation, Embedding, TimeDistributed, Bidirectional
from keras.layers import LSTM, SpatialDropout1D, Conv1D, GlobalMaxPooling1D, MaxPooling1D, Flatten
from keras.layers.core import Dropout
from keras.callbacks import EarlyStopping
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import keras
import codecs
import os

%matplotlib inline


Using TensorFlow backend.

Загрузка готового набора данных


In [2]:
data = pd.read_csv('datasetHabrahabr.csv')
data.head()


Out[2]:
Author Text TextLem
0 25 В шестой части серии учебных материалов, посв... в шесть часть серия учебный материалов, посвят...
1 25 С каждым новом поколением процессоры Intel вб... с каждый новый поколение процессор intel вбира...
2 25 Статья в блоге Intel «Прокачай свой жесткий д... статья в блог intel «прокачать свой жёсткий ди...
3 25 Испанская компания с говорящим названием Geek... испанский компания с говорящий название geeksp...
4 25 igzip — высокопроизводительная библиотека для... igzip — высокопроизводительный библиотека для ...

Загрузка тестовых документов


In [3]:
def get_dataset_from_files():
    #path = 'D:\Разработка\DataScience\Habrahabr'
    path = 'Habrahabr/'
    files = os.listdir(path)
    data_frame = pd.DataFrame()

    for file_name in files:
        file_obj = codecs.open(path + file_name, "r", "utf_8_sig" )
        file_temp = file_obj.read()
        url = file_temp[file_temp.find('url:') + 5:file_temp.find('title:')].rstrip()
        title = file_temp[file_temp.find('title:') + 7:file_temp.find('text:')].rstrip()
        text = file_temp[file_temp.find('text:') + 5:file_temp.find('author:')].rstrip()
        author = file_temp[file_temp.find('author:') + 8:].rstrip()
        row = pd.Series([url, title, text, author], index=['Url', 'Title', 'Text', 'Author'])
        data_frame = data_frame.append(row, ignore_index=True)
        file_obj.close()
        
    return data_frame

In [4]:
#data = get_dataset_from_files()

Подсчет количества слов в каждой статье


In [3]:
data['CountWords'] = data['Text'].map(lambda x: len(x.split()))
print('Количество статей в корпусе:', len(data))
data.head()


Количество статей в корпусе: 3381
Out[3]:
Author Text TextLem CountWords
0 25 В шестой части серии учебных материалов, посв... в шесть часть серия учебный материалов, посвят... 2260
1 25 С каждым новом поколением процессоры Intel вб... с каждый новый поколение процессор intel вбира... 756
2 25 Статья в блоге Intel «Прокачай свой жесткий д... статья в блог intel «прокачать свой жёсткий ди... 795
3 25 Испанская компания с говорящим названием Geek... испанский компания с говорящий название geeksp... 107
4 25 igzip — высокопроизводительная библиотека для... igzip — высокопроизводительный библиотека для ... 1534

In [4]:
data.CountWords.plot(kind='bar', figsize=(15, 5), title="Number of worlds in texts");


Количество статей у каждого пользователя


In [4]:
num_classes = 29

In [5]:
author_count_news = data.Author.value_counts()[:num_classes]
author_count_news.plot(kind='bar', figsize=(15, 5), title="Number of author's articles");


Подготовка данных для анализа

  • Добавление только топ 30 авторов
  • Удаление стоп слов
  • Лемматизация текста
  • Удаление лишних столбцов
  • One hot encoding для авторов

In [6]:
temp_data = pd.DataFrame()
names_author = author_count_news.index.values

for author in names_author:
    temp_data = temp_data.append(data[data.Author == author])

data = temp_data
print('Количество статей после удаления:', len(data))


Количество статей после удаления: 3381

In [20]:
from nltk.corpus import stopwords
stop = stopwords.words('russian')
data['Text'].apply(lambda x: ' '.join([item for item in x.split() if item not in stop]))
print('Stop words have been deleted')


Stop words have been deleted

In [22]:
import pymorphy2
morph = pymorphy2.MorphAnalyzer()
data['TextLem'] = data['Text'].map(lambda x: ' '.join([morph.parse(word)[0].normal_form for word in x.split()]))
print('The lemmatization completed')


The lemmatization completed

In [7]:
names = data.Author.value_counts().index.values

lableEnc = LabelEncoder()
lableEnc.fit(names.ravel()) 
lables = lableEnc.transform(names).reshape((num_classes, 1))

oneHotEnc = OneHotEncoder()
oneHotEnc.fit(lables)

#lableEnc.fit(names_author.ravel()) 
#lables = lableEnc.transform(names_author).reshape((num_classes, 1))
#oneHotEnc.fit(lables)

# Example encoding
#aaa = lableEnc.transform(['@saul'])
#vvv = oneHotEnc.transform(aaa).toarray()
#print(vvv)


Out[7]:
OneHotEncoder(categorical_features='all', dtype=<class 'numpy.float64'>,
       handle_unknown='error', n_values='auto', sparse=True)

In [8]:
for author in names:
    val = lableEnc.transform([author])[0]
    data.Author.replace(to_replace=author, value=val, inplace=True)

#data = data.drop(['Url', 'Title', 'CountWords'], axis=1)
data.head()


Out[8]:
Author Text TextLem CountWords
0 25 В шестой части серии учебных материалов, посв... в шесть часть серия учебный материалов, посвят... 2260
1 25 С каждым новом поколением процессоры Intel вб... с каждый новый поколение процессор intel вбира... 756
2 25 Статья в блоге Intel «Прокачай свой жесткий д... статья в блог intel «прокачать свой жёсткий ди... 795
3 25 Испанская компания с говорящим названием Geek... испанский компания с говорящий название geeksp... 107
4 25 igzip — высокопроизводительная библиотека для... igzip — высокопроизводительный библиотека для ... 1534

Сохранение датасета


In [28]:
filename = 'datasetHabrahabr.csv'
data.to_csv(filename, index=False, encoding='utf-8')

Перемешивание набора данных


In [9]:
data = data.sample(frac=1).reset_index(drop=True)

Токенизация текста


In [10]:
def get_texts_to_matrix(texts, max_features = 0):
    tokenizer = Tokenizer(split=" ", lower=True)
    if max_features != 0:
        tokenizer = Tokenizer(split=" ", lower=True, num_words=max_features)
    
    tokenizer.fit_on_texts(texts)
    matrix_tfidf = tokenizer.texts_to_matrix(texts=texts, mode='tfidf')
    print('Количество текстов:', matrix_tfidf.shape[0])
    print('Количество токенов:', matrix_tfidf.shape[1])
    return matrix_tfidf

In [11]:
def get_texts_to_sequences(text):
    # создаем единый словарь (слово -> число) для преобразования
    tokenizer = Tokenizer(split=" ", lower=True)
    tokenizer.fit_on_texts(text)
    # Преобразуем все описания в числовые последовательности, заменяя слова на числа по словарю.
    text_sequences = tokenizer.texts_to_sequences(text)
    print('В словаре {} слов'.format(len(tokenizer.word_index)))
    return text_sequences

In [12]:
def get_texts_to_gramm_sequences(texts, count_gramm = 3):
    gramms = {}
    counter_gramm = 0
    result = []
    temp_vector = []
    
    for text in texts:
        for i in range(len(text) - count_gramm - 1):
            gramm = text[i : i + count_gramm]
            if gramms.get(gramm) == None:
                gramms[gramm] = counter_gramm
                counter_gramm += 1
            temp_vector.append(gramms[gramm])
        result.append(temp_vector)
        temp_vector = []
        
    print('Количество грамм в корпусе:', len(gramms))
    #count_gramm = [len(x) for x in text_threegramm]
    #num = np.array(count_gramm)
    #num.mean()
    return result

In [13]:
#X = get_texts_to_matrix(data['Text'])
X = get_texts_to_gramm_sequences(data['Text'], count_gramm=4)
#X = get_texts_to_sequences(data['Text'])


Количество грамм в корпусе: 863374

In [14]:
means = [len(x) for x in X]
plt.plot(means)


Out[14]:
[<matplotlib.lines.Line2D at 0x7f1b3d61f400>]

Разбиваем выборку на тестовую и тренировочную


In [15]:
def get_X_y_for_traning(X, y, num_words):
    #tokenizer = Tokenizer(num_words=num_words)
    #X = tokenizer.sequences_to_matrix(X_train, mode='binary')
    X = keras.preprocessing.sequence.pad_sequences(X, maxlen=num_words) 
    y = keras.utils.to_categorical(y, num_classes)
    print('Размерность X:', X.shape) 
    print('Размерность y:', y.shape)
    return X, y

In [16]:
# Максимальное количество слов в самом длинном тексте
num_words = 30000
X_full, y_full = get_X_y_for_traning(X, data.Author, num_words)
X_train, X_test, y_train, y_test = train_test_split(X_full, y_full, test_size=0.2, random_state=42)

print('Testing set size:', len(X_test))
print('Training set size:', len(X_train))


Размерность X: (3381, 30000)
Размерность y: (3381, 29)
Testing set size: 677
Training set size: 2704

Создание модели нейронной сети


In [17]:
def get_lstm_model():
    model = Sequential()
    model.add(Embedding(270000, 300))
    model.add(SpatialDropout1D(0.3))
    model.add(LSTM(100, dropout=0.3, recurrent_dropout=0.3))
    model.add(Dense(num_classes, activation="sigmoid"))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def get_conv_model():
    model = Sequential()
    model.add(Embedding(863374, 200))
    model.add(SpatialDropout1D(0.2))
    model.add(Conv1D(filters=512, kernel_size=3, activation='relu'))
    model.add(GlobalMaxPooling1D())
    model.add(Dense(num_classes, activation="sigmoid"))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model


def get_conv_conv_model():
    model = Sequential()
    model.add(Embedding(270000, 300))
    model.add(SpatialDropout1D(0.2))
    model.add(Conv1D(filters=512, kernel_size=3, activation='relu'))
    model.add(MaxPooling1D())
    model.add(Conv1D(filters=256, kernel_size=3, activation='relu'))
    model.add(GlobalMaxPooling1D())
    model.add(Dense(num_classes, activation="sigmoid"))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def get_conv_lstm_model():
    model = Sequential()
    #model.add(Dense(102562, activation='relu', input_shape=(8664, 600)))
    model.add(Embedding(270000, 200))
    model.add(SpatialDropout1D(0.3))
    #model.add(TimeDistributed(Conv1D(filters=512, kernel_size=3, activation='relu')))
    #model.add(TimeDistributed(GlobalMaxPooling1D()))
    #model.add(TimeDistributed(Flatten()))
    model.add(Conv1D(filters=512, kernel_size=3, activation='relu'))
    model.add(MaxPooling1D())
    #model.add(Flatten())
    model.add(LSTM(50, dropout=0.3, recurrent_dropout=0.3))
    model.add(Dense(num_classes, activation="sigmoid"))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def get_lstm_conv_model():
    model = Sequential()
    model.add(Embedding(270000, 300))
    model.add(SpatialDropout1D(0.2))
    model.add(LSTM(50, dropout=0.3, recurrent_dropout=0.3, return_sequences=True)) 
    #model.add(SpatialDropout1D(0.2))
    model.add(Conv1D(filters=512, kernel_size=3, activation='relu'))
    model.add(GlobalMaxPooling1D())
    model.add(Dense(num_classes, activation="sigmoid"))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

In [18]:
%%time
#model = get_lstm_model()
model = get_conv_model()
#model = get_conv_conv_model()
#model = get_conv_lstm_model()

model.summary()

BATCH_SIZE = 16
EPOCHS = 10
VERBOSE = 2

history = model.fit(X_train, y_train,
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS, verbose=VERBOSE,
                    validation_data=(X_test, y_test)
                    #validation_split=0.1, 
                    #callbacks=[EarlyStopping(monitor='val_loss')]
                   )


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (None, None, 200)         172674800 
_________________________________________________________________
spatial_dropout1d_1 (Spatial (None, None, 200)         0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, None, 512)         307712    
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 512)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 29)                14877     
=================================================================
Total params: 172,997,389
Trainable params: 172,997,389
Non-trainable params: 0
_________________________________________________________________
/anaconda/envs/py35/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py:90: UserWarning: Converting sparse IndexedSlices to a dense Tensor with 172674800 elements. This may consume a large amount of memory.
  "This may consume a large amount of memory." % num_elements)
Train on 2704 samples, validate on 677 samples
Epoch 1/10
215s - loss: 2.6054 - acc: 0.2922 - val_loss: 1.3804 - val_acc: 0.6470
Epoch 2/10
193s - loss: 0.7479 - acc: 0.8210 - val_loss: 0.6920 - val_acc: 0.8183
Epoch 3/10
192s - loss: 0.2053 - acc: 0.9645 - val_loss: 0.5747 - val_acc: 0.8375
Epoch 4/10
192s - loss: 0.0452 - acc: 0.9985 - val_loss: 0.4883 - val_acc: 0.8612
Epoch 5/10
192s - loss: 0.0149 - acc: 1.0000 - val_loss: 0.4622 - val_acc: 0.8759
Epoch 6/10
192s - loss: 0.0080 - acc: 1.0000 - val_loss: 0.4574 - val_acc: 0.8700
Epoch 7/10
192s - loss: 0.0053 - acc: 1.0000 - val_loss: 0.4490 - val_acc: 0.8789
Epoch 8/10
192s - loss: 0.0037 - acc: 1.0000 - val_loss: 0.4533 - val_acc: 0.8730
Epoch 9/10
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-18-800880f3afe1> in <module>()
----> 1 get_ipython().run_cell_magic('time', '', "#model = get_lstm_model()\nmodel = get_conv_model()\n#model = get_conv_conv_model()\n#model = get_conv_lstm_model()\n\nmodel.summary()\n\nBATCH_SIZE = 16\nEPOCHS = 10\nVERBOSE = 2\n\nhistory = model.fit(X_train, y_train,\n                    batch_size=BATCH_SIZE,\n                    epochs=EPOCHS, verbose=VERBOSE,\n                    validation_data=(X_test, y_test)\n                    #validation_split=0.1, \n                    #callbacks=[EarlyStopping(monitor='val_loss')]\n                   )")

/anaconda/envs/py35/lib/python3.5/site-packages/IPython/core/interactiveshell.py in run_cell_magic(self, magic_name, line, cell)
   2118             magic_arg_s = self.var_expand(line, stack_depth)
   2119             with self.builtin_trap:
-> 2120                 result = fn(magic_arg_s, cell)
   2121             return result
   2122 

<decorator-gen-60> in time(self, line, cell, local_ns)

/anaconda/envs/py35/lib/python3.5/site-packages/IPython/core/magic.py in <lambda>(f, *a, **k)
    191     # but it's overkill for just that one bit of state.
    192     def magic_deco(arg):
--> 193         call = lambda f, *a, **k: f(*a, **k)
    194 
    195         if callable(arg):

/anaconda/envs/py35/lib/python3.5/site-packages/IPython/core/magics/execution.py in time(self, line, cell, local_ns)
   1175         else:
   1176             st = clock2()
-> 1177             exec(code, glob, local_ns)
   1178             end = clock2()
   1179             out = None

<timed exec> in <module>()

/anaconda/envs/py35/lib/python3.5/site-packages/keras/models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
    861                               class_weight=class_weight,
    862                               sample_weight=sample_weight,
--> 863                               initial_epoch=initial_epoch)
    864 
    865     def evaluate(self, x, y, batch_size=32, verbose=1,

/anaconda/envs/py35/lib/python3.5/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
   1428                               val_f=val_f, val_ins=val_ins, shuffle=shuffle,
   1429                               callback_metrics=callback_metrics,
-> 1430                               initial_epoch=initial_epoch)
   1431 
   1432     def evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None):

/anaconda/envs/py35/lib/python3.5/site-packages/keras/engine/training.py in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch)
   1077                 batch_logs['size'] = len(batch_ids)
   1078                 callbacks.on_batch_begin(batch_index, batch_logs)
-> 1079                 outs = f(ins_batch)
   1080                 if not isinstance(outs, list):
   1081                     outs = [outs]

/anaconda/envs/py35/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
   2266         updated = session.run(self.outputs + [self.updates_op],
   2267                               feed_dict=feed_dict,
-> 2268                               **self.session_kwargs)
   2269         return updated[:len(self.outputs)]
   2270 

/anaconda/envs/py35/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    787     try:
    788       result = self._run(None, fetches, feed_dict, options_ptr,
--> 789                          run_metadata_ptr)
    790       if run_metadata:
    791         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/anaconda/envs/py35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    995     if final_fetches or final_targets:
    996       results = self._do_run(handle, final_targets, final_fetches,
--> 997                              feed_dict_string, options, run_metadata)
    998     else:
    999       results = []

/anaconda/envs/py35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1130     if handle is None:
   1131       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1132                            target_list, options, run_metadata)
   1133     else:
   1134       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/anaconda/envs/py35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1137   def _do_call(self, fn, *args):
   1138     try:
-> 1139       return fn(*args)
   1140     except errors.OpError as e:
   1141       message = compat.as_text(e.message)

/anaconda/envs/py35/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1119         return tf_session.TF_Run(session, options,
   1120                                  feed_dict, fetch_list, target_list,
-> 1121                                  status, run_metadata)
   1122 
   1123     def _prun_fn(session, handle, feed_dict, fetch_list):

KeyboardInterrupt: 

In [23]:
print('Точность модели составляет: {}'.format(model.evaluate(X_test, y_test, batch_size=32, verbose=2)[1] * 100))


Точность модели составляет: 78.72968982558483

In [153]:
from matplotlib import pyplot as plt
print(history.history.keys())
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper left')
plt.show();
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper left')
plt.show();


dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])

In [99]:
def save_model(model_name):
    # Генерируем описание модели в формате json
    model_json = model.to_json()
    # Записываем модель в файл
    json_file = open("model/{}_model.json".Format(model_name), "w")
    json_file.write(model_json)
    json_file.close()
    model.save_weights("model/{}_weights.h5".Format(model_name))

In [100]:
#save_model(habra_86persent)

In [ ]: