A Python Tour of Data Science: Data Exploitation

Michaël Defferrard, PhD student, EPFL LTS2

Exercise: problem definition

Theme of the exercise: understand the impact of your communication on social networks. A real life situation: the marketing team needs help in identifying which were the most engaging posts they made on social platforms to prepare their next AdWords campaign.

This notebook is the second part of the exercise. Given the data we collected from Facebook an Twitter in the last exercise, we will construct an ML model and evaluate how good it is to predict the number of likes of a post / tweet given the content.

1 Data importation

  1. Use pandas to import the facebook.sqlite and twitter.sqlite databases.
  2. Print the 5 first rows of both tables.

The facebook.sqlite and twitter.sqlite SQLite databases can be created by running the data acquisition and exploration exercise.


In [1]:
import pandas as pd
import numpy as np
from IPython.display import display
import os.path

folder = os.path.join('..', 'data', 'social_media')

fb = pd.read_sql('facebook', 'sqlite:///' + os.path.join(folder, 'facebook.sqlite'), index_col='index')
tw = pd.read_sql('twitter', 'sqlite:///' + os.path.join(folder, 'twitter.sqlite'), index_col='index')

display(fb[:5])
display(tw[:5])


id text time likes comments
index
0 995189307173864_1569040983122024 HelloTomorrow - showcasing flooring as a servi... 2016-10-14 10:41:42 3 0
1 995189307173864_1567048186654637 Amongst the TOP500 worldwide tech startups #pa... 2016-10-13 07:12:08 49 0
2 995189307173864_1530528866973236 Meet Technis at Beaulieu Lausanne #comptoirsu... 2016-09-17 12:02:44 32 0
3 995189307173864_1526804424012347 Le gagnant du concours Technis au Comptoir Sui... 2016-09-14 07:44:17 0 8
4 995189307173864_1489719734387483 Gold for 🇦🇷Del Potro🇦🇷 or 🇬🇧Murray🇬🇧? 2016-08-14 20:46:05 6 0
id text time likes shares
index
0 789383850921078784 RT @MyTechnis: @MyTechnis Flooring as a servic... 2016-10-21 08:32:30 0 6
1 786851711008837632 @MyTechnis Flooring as a service (FaaS) enchan... 2016-10-14 08:50:41 8 6
2 786837466120654848 RT @wikbou: Innovation is changing the world. ... 2016-10-14 07:54:05 0 1
3 786837128466554880 RT @MaHo_Pallini: "Make sure your entrepreneur... 2016-10-14 07:52:44 0 1
4 786490290932547584 RT @Innovaud: Bravo! Technis @MyTechnis déjà p... 2016-10-13 08:54:32 0 2

2 Vectorization

First step: transform the data into a format understandable by the machine. What to do with text ? A common choice is the so-called bag-of-word model, where we represent each word a an integer and simply count the number of appearances of a word into a document.

Example

Let's say we have a vocabulary represented by the following correspondance table.

Integer Word
0 unknown
1 dog
2 school
3 cat
4 house
5 work
6 animal

Then we can represent the following document

I have a cat. Cats are my preferred animals.

by the vector $x = [6, 0, 0, 2, 0, 0, 1]^T$.

Tasks

  1. Construct a vocabulary of the 100 most occuring words in your dataset.
  2. Build a vector $x \in \mathbb{R}^{100}$ for each document (post or tweet).

Tip: the natural language modeling libraries nltk and gensim are useful for advanced operations. You don't need them here.

Arise a first data cleaning question. We may have some text in french and other in english. What do we do ?


In [2]:
from sklearn.feature_extraction.text import CountVectorizer

nwords = 200  # 100

def compute_bag_of_words(text, nwords):
    vectorizer = CountVectorizer(max_features=nwords)
    vectors = vectorizer.fit_transform(text)
    vocabulary = vectorizer.get_feature_names()
    return vectors, vocabulary

fb_bow, fb_vocab = compute_bag_of_words(fb.text, nwords)
#fb_p = pd.Panel({'orig': fb, 'bow': fb_bow})
display(fb_bow)
display(fb_vocab[100:110])

tw_bow, tw_vocab = compute_bag_of_words(tw.text, nwords)
display(tw_bow)


<52x200 sparse matrix of type '<class 'numpy.int64'>'
	with 602 stored elements in Compressed Sparse Row format>
['play',
 'positions',
 'possibility',
 'potentially',
 'potro',
 'pour',
 'ppllulxlpy',
 'première',
 'presentation',
 'prize']
<77x200 sparse matrix of type '<class 'numpy.int64'>'
	with 760 stored elements in Compressed Sparse Row format>

Exploration question: what are the 5 most used words ? Exploring your data while playing with it is a useful sanity check.


In [3]:
def print_most_frequent(bow, vocab, n=10):
    idx = np.argsort(bow.sum(axis=0))
    for i in range(10):
        j = idx[0, -i]
        print(vocab[j])

print_most_frequent(tw_bow, tw_vocab)
print('---')
print_most_frequent(fb_bow, fb_vocab)


04gv3nwwyv
co
https
mytechnis
rt
the
technis
of
for
http
---
000
the
technis
to
http
and
co
https
of
for

3 Pre-processing

  1. The independant variables $X$ are the bags of words.
  2. The target $y$ is the number of likes.
  3. Split in half for training and testing sets.

In [4]:
X = tw_bow
y = tw['likes'].values

n, d = X.shape
assert n == y.size

print(X.shape)
print(y.shape)


(77, 200)
(77,)

In [5]:
# Training and testing sets.
test_size = n // 2
print('Split: {} testing and {} training samples'.format(test_size, y.size - test_size))
perm = np.random.permutation(y.size)
X_test  = X[perm[:test_size]]
X_train = X[perm[test_size:]]
y_test  = y[perm[:test_size]]
y_train = y[perm[test_size:]]


Split: 38 testing and 39 training samples

4 Linear regression

Using numpy, fit and evaluate the linear model $$\hat{w}, \hat{b} = \operatorname*{arg min}_{w,b} \| Xw + b - y \|_2^2.$$

Please define a class LinearRegression with two methods:

  1. fit learn the parameters $w$ and $b$ of the model given the training examples.
  2. predict gives the estimated number of likes of a post / tweet. That will be used to evaluate the model on the testing set.

To evaluate the classifier, create an accuracy(y_pred, y_true) function which computes the mean squared error $\frac1n \| \hat{y} - y \|_2^2$.

Hint: you may want to use the function scipy.sparse.linalg.spsolve().

If solve and spsolve tells you that your matrix is singular, please read this good comment. Potential solutions:

  1. Is there any post / tweet without any word from the vocabulary ? I.e. a row of $X$ made only of zeroes. If yes, remove this row or enlarge the vocabulary.
  2. Identify and remove redundant features, i.e. words, who are linear combinations of others.
  3. What else could we do ?

In [13]:
import scipy.sparse

class LinearRegression(object):
    
    def predict(self, X):
        """Return the predicted class given the features."""
        return X.dot(self.w) + self.b
    
    def fit(self, X, y):
        """Learn the model's parameters given the training data, the closed-form way."""
        n, d = X.shape
        self.b = y.mean()
        A = X.T.dot(X)
        b = X.T.dot(y - self.b)
        #self.w = np.linalg.solve(A, b)
        self.w = scipy.sparse.linalg.spsolve(A, b)

def evaluate(y_pred, y_true):
    return np.linalg.norm(y_pred - y_true, ord=2)**2 / y_true.size

model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

mse = evaluate(y_pred, y_test)
print('mse: {:.4f}'.format(mse))


mse: nan
/Users/malogrisard/anaconda/lib/python3.5/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:155: MatrixRankWarning: Matrix is exactly singular
  warn("Matrix is exactly singular", MatrixRankWarning)

Interpretation: what are the most important words a post / tweet should include ?


In [7]:
idx = np.argsort(abs(model.w))

for i in range(20):
    j = idx[-1-i]
    print('weight: {:5.2f}, word: {}'.format(model.w[j], tw_vocab[j]))


weight:   nan, word: your
weight:   nan, word: minister
weight:   nan, word: par
weight:   nan, word: our
weight:   nan, word: or
weight:   nan, word: on
weight:   nan, word: of
weight:   nan, word: night
weight:   nan, word: next
weight:   nan, word: nadal
weight:   nan, word: mytechnis
weight:   nan, word: more
weight:   nan, word: merci
weight:   nan, word: you
weight:   nan, word: masschallengech
weight:   nan, word: maho_pallini
weight:   nan, word: live
weight:   nan, word: les
weight:   nan, word: lavercup
weight:   nan, word: laver

5 Interactivity

  1. Create a slider for the number of words, i.e. the dimensionality of the samples $x$.
  2. Print the accuracy for each change on the slider.

In [8]:
import ipywidgets
from IPython.display import clear_output

slider = ipywidgets.widgets.IntSlider(
    value=1,
    min=1,
    max=nwords,
    step=1,
    description='nwords',
)

def handle(change):
    """Handler for value change: fit model and print performance."""
    nwords = change['new']
    clear_output()
    print('nwords = {}'.format(nwords))
    model = LinearRegression()
    model.fit(X_train[:, :nwords], y_train)
    y_pred = model.predict(X_test[:, :nwords])
    mse = evaluate(y_pred, y_test)
    print('mse: {:.4f}'.format(mse))

slider.observe(handle, names='value')
display(slider)

slider.value = nwords  # As if someone moved the slider.


nwords = 200
mse: nan
/Users/malogrisard/anaconda/lib/python3.5/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:155: MatrixRankWarning: Matrix is exactly singular
  warn("Matrix is exactly singular", MatrixRankWarning)

6 Scikit learn

  1. Fit and evaluate the linear regression model using sklearn.
  2. Evaluate the model with the mean squared error metric provided by sklearn.
  3. Compare with your implementation.

In [9]:
from sklearn import linear_model, metrics

model = linear_model.LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

mse = metrics.mean_squared_error(y_test, y_pred)
assert np.allclose(evaluate(y_pred, y_test), mse)
print('mse: {:.4f}'.format(mse))


mse: 21.1223

7 Deep Learning

Try a simple deep learning model !

Another modeling choice would be to use a Recurrent Neural Network (RNN) and feed it the sentence words after words.


In [10]:
import os
os.environ['KERAS_BACKEND'] = 'theano'  # tensorflow
import keras

model = keras.models.Sequential()
model.add(keras.layers.Dense(output_dim=50, input_dim=nwords, activation='relu'))
model.add(keras.layers.Dense(output_dim=20, activation='relu'))
model.add(keras.layers.Dense(output_dim=1, activation='relu'))
model.compile(loss='mse', optimizer='sgd')

model.fit(X_train.toarray(), y_train, nb_epoch=20, batch_size=100)
y_pred = model.predict(X_test.toarray(), batch_size=32)

mse = evaluate(y_test, y_pred.squeeze())
print('mse: {:.4f}'.format(mse))


Using Theano backend.
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
Epoch 1/20
39/39 [==============================] - 0s - loss: 15.3521
Epoch 2/20
39/39 [==============================] - 0s - loss: 14.6964
Epoch 3/20
39/39 [==============================] - 0s - loss: 13.9968
Epoch 4/20
39/39 [==============================] - 0s - loss: 13.3878
Epoch 5/20
39/39 [==============================] - 0s - loss: 12.8100
Epoch 6/20
39/39 [==============================] - 0s - loss: 12.2525
Epoch 7/20
39/39 [==============================] - 0s - loss: 11.7433
Epoch 8/20
39/39 [==============================] - 0s - loss: 11.2761
Epoch 9/20
39/39 [==============================] - 0s - loss: 10.8619
Epoch 10/20
39/39 [==============================] - 0s - loss: 10.4949
Epoch 11/20
39/39 [==============================] - 0s - loss: 10.1475
Epoch 12/20
39/39 [==============================] - 0s - loss: 9.8085
Epoch 13/20
39/39 [==============================] - 0s - loss: 9.4795
Epoch 14/20
39/39 [==============================] - 0s - loss: 9.1534
Epoch 15/20
39/39 [==============================] - 0s - loss: 8.8187
Epoch 16/20
39/39 [==============================] - 0s - loss: 8.4853
Epoch 17/20
39/39 [==============================] - 0s - loss: 8.1413
Epoch 18/20
39/39 [==============================] - 0s - loss: 7.7981
Epoch 19/20
39/39 [==============================] - 0s - loss: 7.4509
Epoch 20/20
39/39 [==============================] - 0s - loss: 7.0961
mse: 19.9257

8 Evaluation

Use matplotlib to plot a performance visualization. E.g. the true number of likes and the real number of likes for all posts / tweets.

What do you observe ? What are your suggestions to improve the performance ?


In [11]:
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline

n = 100
plt.figure(figsize=(15, 5))
plt.plot(y_test[:n], '.', alpha=.7, markersize=10, label='ground truth')
plt.plot(y_pred[:n], '.', alpha=.7, markersize=10, label='prediction')
plt.legend()
plt.show()