Michaël Defferrard, PhD student, EPFL LTS2
Theme of the exercise: understand the impact of your communication on social networks. A real-life situation: the marketing team needs help identifying which of the posts they made on social platforms were the most engaging, in order to prepare their next AdWords campaign.
This notebook is the second part of the exercise. Given the data we collected from Facebook and Twitter in the last exercise, we will construct an ML model and evaluate how well it predicts the number of likes of a post / tweet given its content.
In [1]:
import pandas as pd
import numpy as np
from IPython.display import display
import os.path
folder = os.path.join('..', 'data', 'social_media')
# Your code here.
fb = pd.read_sql('facebook', 'sqlite:///' + os.path.join(folder, 'facebook.sqlite'))
tw = pd.read_sql('twitter', 'sqlite:///' + os.path.join(folder, 'twitter.sqlite'))
n, d = fb.shape
print('The data is a {} with {} samples of dimensionality {}.'.format(type(fb), n, d))
First step: transform the data into a format the machine can understand. What to do with text? A common choice is the so-called bag-of-words model, where we represent each word as an integer and simply count the number of appearances of each word in a document.
Example
Let's say we have a vocabulary represented by the following correspondence table.
Integer | Word |
---|---|
0 | unknown |
1 | dog |
2 | school |
3 | cat |
4 | house |
5 | work |
6 | animal |
Then we can represent the following document
I have a cat. Cats are my preferred animals.
by the vector $x = [6, 0, 0, 2, 0, 0, 1]^T$.
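The counting above can be sketched in a few lines of Python (the crude tokenization and trailing-`s` stripping are simplifications for this toy example, not a general lemmatizer):

```python
import re

# Toy correspondence table; index 0 is reserved for unknown words.
vocabulary = {'dog': 1, 'school': 2, 'cat': 3, 'house': 4, 'work': 5, 'animal': 6}

def bag_of_words(document, vocabulary, size=7):
    """Map a document to its term-count vector; index 0 counts unknown words."""
    # Crude normalization: lowercase, keep letters only, strip a plural 's'.
    tokens = [w.rstrip('s') for w in re.findall(r'[a-z]+', document.lower())]
    x = [0] * size
    for token in tokens:
        x[vocabulary.get(token, 0)] += 1
    return x

print(bag_of_words('I have a cat. Cats are my preferred animals.', vocabulary))
# [6, 0, 0, 2, 0, 0, 1]
```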
Tasks
Tip: the natural language modeling libraries nltk and gensim are useful for advanced operations. You don't need them here.
A first data-cleaning question arises: some of the text may be in French and some in English. What do we do?
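One cheap option is a stop-word-overlap heuristic, sketched below (the tiny word lists are our own assumption; nltk provides proper stop-word lists and dedicated libraries do real language identification):

```python
# Tiny hand-picked stop-word lists (an assumption for illustration only).
FR_STOPWORDS = {'le', 'la', 'les', 'un', 'une', 'des', 'et', 'est', 'pour', 'avec'}
EN_STOPWORDS = {'the', 'a', 'an', 'and', 'is', 'are', 'for', 'with', 'of', 'to'}

def guess_language(text):
    """Guess 'fr' or 'en' by counting the stop words of each language."""
    words = set(text.lower().split())
    n_fr, n_en = len(words & FR_STOPWORDS), len(words & EN_STOPWORDS)
    return 'fr' if n_fr > n_en else 'en'

print(guess_language('le chat est dans la maison'))  # fr
print(guess_language('the cat is in the house'))     # en
```

One could then keep a single language, or fit a separate vocabulary per language.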
In [13]:
# Data cleaning: strip URL tokens such as 'http' from the post text
# so they do not pollute the vocabulary.
fb['text'] = fb['text'].str.replace(r'http\S*', '', regex=True)
In [2]:
from sklearn.feature_extraction.text import CountVectorizer
import re
nwords = 100
# Your code here.
vectorizer = CountVectorizer(max_features=nwords)
#----------------------------------------------------------------------------------------------
fb_text_vec = vectorizer.fit_transform(fb['text'])
fb_text_vectorized = fb_text_vec.toarray()
fb_words = vectorizer.get_feature_names()
# Data cleaning: 'http' is a URL artifact, not a real word.
fb_words.remove('http')
freqs = [(word, fb_text_vec.getcol(idx).sum()) for word, idx in vectorizer.vocabulary_.items()]
fb_Most_used = sorted(freqs, key=lambda x: -x[1])
#----------------------------------------------------------------------------------------------
tw_text_vec = vectorizer.fit_transform(tw['text'])  # Refit the vocabulary on the tweets.
tw_text_vectorized = tw_text_vec.toarray()
tw_words = vectorizer.get_feature_names()
# Data cleaning: 'rt' marks retweets, not a real word.
tw_words.remove('rt')
freqs = [(word, tw_text_vec.getcol(idx).sum()) for word, idx in vectorizer.vocabulary_.items()]
tw_Most_used = sorted(freqs, key=lambda x: -x[1])
Exploration question: what are the 5 most used words? Exploring your data while playing with it is a useful sanity check.
In [10]:
print(fb_Most_used[:5])
print(tw_Most_used[:5])
In [11]:
# Your code here.
X = tw_text_vectorized
X = X.astype(float)
#X -= X.mean(axis=0)
#X /= X.std(axis=0)
y = tw['likes']
y = y.astype(float)
In [12]:
# Training and testing sets.
test_size = round(len(X) / 2)
print('Split: {} testing and {} training samples'.format(test_size, y.size - test_size))
perm = np.random.permutation(y.size)
# Apply the same permutation to X and y so features and labels stay aligned.
X_test = X[perm[:test_size]]
X_train = X[perm[test_size:]]
y_test = y.values[perm[:test_size]]
y_train = y.values[perm[test_size:]]
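Equivalently, scikit-learn's `train_test_split` shuffles `X` and `y` with one consistent permutation; a small sketch on toy data (the shapes are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20, dtype=float).reshape(10, 2)  # row i is [2i, 2i+1]
y = np.arange(10, dtype=float)                 # label i belongs to row i

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
print(X_train.shape, X_test.shape)  # (5, 2) (5, 2)
# Each label still matches its row after the shuffle.
print(all(y_train == X_train[:, 0] / 2))  # True
```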
Using `numpy`, fit and evaluate the linear model
$$\hat{w}, \hat{b} = \operatorname*{arg min}_{w,b} \| Xw + b - y \|_2^2.$$
Please define a class `LinearRegression` with two methods:
- `fit` learns the parameters $w$ and $b$ of the model given the training examples.
- `predict` gives the estimated number of likes of a post / tweet. It will be used to evaluate the model on the testing set.

To evaluate the model, create an `accuracy(y_pred, y_true)` function which computes the mean squared error $\frac1n \| \hat{y} - y \|_2^2$.
Hint: you may want to use the function `scipy.sparse.linalg.spsolve()`.
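A minimal sketch of the requested `accuracy` function (despite its name, it returns the mean squared error, so lower is better):

```python
import numpy as np

def accuracy(y_pred, y_true):
    """Mean squared error (1/n) * ||y_pred - y_true||_2^2."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.mean((y_pred - y_true) ** 2)

print(accuracy([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # 1.333...
```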
In [13]:
import scipy.sparse

class RidgeRegression(object):
    """Our ML model."""

    def __init__(self, alpha=0):
        "The class' constructor. Initialize the hyper-parameters."
        self.a = alpha

    def predict(self, X):
        """Return the predicted number of likes given the features."""
        return X.dot(self.w) + self.b

    def fit(self, X, y):
        """Learn the model's parameters given the training data, the closed-form way."""
        n, d = X.shape
        self.b = np.mean(y)
        Ainv = np.linalg.inv(X.T.dot(X) + self.a * np.identity(d))
        self.w = Ainv.dot(X.T).dot(y - self.b)

    def loss(self, X, y, w=None, b=None):
        """Return the current loss.

        This method is not strictly necessary, but it provides
        information on the convergence of the learning process."""
        w = self.w if w is None else w  # The ternary conditional operator
        b = self.b if b is None else b  # makes those tests concise.
        import autograd.numpy as np  # See below for autograd.
        return np.linalg.norm(np.dot(X, w) + b - y)**2 + self.a * np.linalg.norm(w, 2)**2
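The closed-form solution can be sanity-checked on synthetic data; a sketch (the centering step is our addition, since setting $b$ to the mean of $y$ is only exact when the columns of $X$ are centered):

```python
import numpy as np

rng = np.random.RandomState(0)
n, d = 200, 3
X = rng.randn(n, d)
X -= X.mean(axis=0)            # center features so b = mean(y) is the exact intercept
w_true = np.array([1.0, -2.0, 3.0])
b_true = 5.0
y = X.dot(w_true) + b_true     # noiseless targets

alpha = 0.0                    # plain least squares; alpha > 0 adds ridge shrinkage
b = y.mean()
w = np.linalg.solve(X.T.dot(X) + alpha * np.identity(d), X.T.dot(y - b))

print(np.allclose(w, w_true), np.allclose(b, b_true))  # True True
```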
Interpretation: what are the most important words a post / tweet should include?
In [15]:
# Your code here.
import sklearn.metrics
model = RidgeRegression()
model.fit(X_train, y_train)
y_pred_train = model.predict(X_train)
train_mse = sklearn.metrics.mean_squared_error(y_train, y_pred_train)
y_pred_test = model.predict(X_test)
test_mse = sklearn.metrics.mean_squared_error(y_test, y_pred_test)
print('train MSE:', train_mse)
print('test MSE:', test_mse)
In [ ]:
import ipywidgets
from IPython.display import clear_output
# Your code here.
In [16]:
from sklearn import linear_model, metrics
# Your code here.
# scikit-learn's Ridge is the off-the-shelf counterpart of our RidgeRegression.
model = linear_model.Ridge()
model.fit(X_train, y_train)
y_pred_train = model.predict(X_train)
train_mse = metrics.mean_squared_error(y_train, y_pred_train)
y_pred_test = model.predict(X_test)
test_mse = metrics.mean_squared_error(y_test, y_pred_test)
print('train MSE:', train_mse)
print('test MSE:', test_mse)
In [ ]:
import os
os.environ['KERAS_BACKEND'] = 'theano' # tensorflow
import keras
# Your code here.
Use matplotlib to plot a performance visualization, e.g. the predicted number of likes against the true number of likes for all posts / tweets.
What do you observe? What are your suggestions to improve the performance?
In [ ]:
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
# Your code here.
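One possible visualization, sketched on synthetic data (the Poisson 'like' counts and noisy predictions below are made up for illustration; in the notebook you would plot `y_test` and `model.predict(X_test)` instead):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend for scripts; not needed with %matplotlib inline
from matplotlib import pyplot as plt

rng = np.random.RandomState(0)
y_true = rng.poisson(10, size=50).astype(float)  # hypothetical like counts
y_pred = y_true + rng.randn(50)                  # hypothetical model predictions

fig, ax = plt.subplots()
ax.plot(y_true, label='true likes')
ax.plot(y_pred, label='predicted likes')
ax.set_xlabel('post / tweet index')
ax.set_ylabel('number of likes')
ax.legend()
fig.savefig('performance.png')
```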