Fraud detection machine learning on the Enron enterprise dataset

Table of Contents

  1. Abstract
  2. Environment, Best Practices and Fundamental Steps
  3. Methods and Procedures
    1. Naive Bayes
    2. SVM
    3. Decision Tree
  4. Summary of Results
  5. References

1. Abstract

The purpose of this project is to provide a reproducible paper studying how well the Naive Bayes, SVM, and Decision Tree machine learning algorithms can identify emails by their authors, using a pre-processed list of email texts and the corresponding authors based on the text dataset (comprised of 146 users with 21 features each) of the famous fraud scandal that bankrupted the American corporation Enron. We will also study ways to work with parameters to improve accuracy and performance.

NB: All contents and instructions used for this paper were based on the "Udacity - Introduction to Machine Learning" course, and were adapted according to the goals explained here. This is being used for educational purposes only.

For more information on the history of the corporation, please see the link below:
http://www.investopedia.com/updates/enron-scandal-summary/

2. Environment, Best Practices and Fundamental Steps

This project is based on the following tools: git 2.7.4, Anaconda 4.3.1 (64-bit), Jupyter Notebook Server 4.3.1, Python 2.7.13, and the scikit-learn library.

The experiments can be reproduced in three distinct ways: through an Anaconda installation, through Docker, or through Oracle VirtualBox.

Please read the following link for best practices concerning projects with this environment, as well as key setup procedures: https://github.com/ecalio07/enron-paper/blob/master/BEST_PRACTICES.md

3. Methods and Procedures

Arguments will be configured for each classifier below so as to reach the best time performance and accuracy, and the results will be compared.

We have a set of emails, half of which were written by one person and the other half by another person at the same company. Our objective is to classify the emails as written by one person or the other based only on the text of the email.

In order to know which algorithm is best for this situation, we should run tests and, based on the results, determine which one is most suitable for our scenario.

A couple of years ago, J.K. Rowling (of Harry Potter fame) tried something interesting. She wrote a book, "The Cuckoo's Calling," under the name Robert Galbraith. The book received some good reviews, but no one paid much attention to it until an anonymous tipster on Twitter said it was J.K. Rowling. The London Sunday Times enlisted two experts to compare the linguistic patterns of "Cuckoo" to Rowling's "The Casual Vacancy," as well as to books by several other authors. After the results of their analysis pointed strongly toward Rowling as the author, the Times directly asked the publisher if they were the same person, and the publisher confirmed. The book exploded in popularity overnight.

We'll do something very similar in this project: classifying each email in our set as written by one of the two authors, based only on its text. We will start with Naive Bayes in this mini-project, and then expand in later projects to the other algorithms.

3.1. Naive Bayes

It is considered the holy grail of probabilistic inference. It is named after the Reverend Thomas Bayes, who used its principles (Bayes' rule) to infer the existence of God. He created a family of methods that influenced artificial intelligence and statistics. The algorithm uses the concepts of sensitivity and specificity.

Naive Bayes is a supervised classification algorithm used substantially in learning from documents (text learning). Each word is considered a feature and the author names are considered the labels. It is called "naive" because it ignores word order.

The classifier uses the posterior probability, ranking each label by how likely it is given the text. In other words, it will be trained with the words (features) frequently used by Chris and Sara (labels), and it will then calculate the probability that each test email was written by Chris or by Sara.
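That ranking can be sketched by hand. The following is a toy model with hypothetical word likelihoods; the names `p_word`, `prior`, and `posterior` are illustrative and not taken from the project code:

```python
# Hypothetical per-author word likelihoods, assumed for illustration only
p_word = {"chris": {"deal": 0.8, "life": 0.1},
          "sara":  {"deal": 0.2, "life": 0.5}}
prior = {"chris": 0.5, "sara": 0.5}

def posterior(words):
    """Naive Bayes: multiply the prior by each word's likelihood, then normalize."""
    scores = dict(prior)
    for author in scores:
        for w in words:
            scores[author] *= p_word[author].get(w, 1e-6)  # tiny floor for unseen words
    total = sum(scores.values())
    return {author: s / total for author, s in scores.items()}

# "life" pulls toward Sara more strongly than "deal" pulls toward Chris here
print(posterior(["life", "deal"]))
```

Because the model multiplies independent per-word likelihoods, this is exactly the "naive" assumption described above: word order never enters the computation.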


In [29]:
import pickle
import numpy
%run bayes_31-05-17.ipynb
# arr = numpy.load('/tmp/123.npy')
# for item in arr:
#     print item


no. of Chris training emails: 7936
no. of Sara training emails: 7884
training time: 0.685 s
predicting time: 0.097 s
0.973265073948
NUMBER OF PREDICTIONS FOR CRIS 906
NUMBER OF PREDICTIONS FOR SARAH 852
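A self-contained sketch of what the notebook above does, with a tiny invented four-email corpus standing in for the preprocessed Enron emails (TF-IDF features fed to a `GaussianNB` classifier; the corpus and author tags are assumptions for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import GaussianNB

# Toy corpus standing in for the preprocessed Enron email texts
emails = ["budget meeting tomorrow", "trade gas contracts today",
          "meeting about the budget", "gas trading desk report"]
authors = ["sara", "chris", "sara", "chris"]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(emails).toarray()  # GaussianNB needs dense input

clf = GaussianNB()
clf.fit(features, authors)

# Words like "gas" and "trading" only appear in the "chris" emails above
test = vectorizer.transform(["report on gas trading"]).toarray()
print(clf.predict(test))
```

The real run trains on thousands of emails per author, which is why its accuracy (0.973 above) is meaningful while a four-email toy is only a shape demonstration.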

3.2. SVM

It separates two classes by creating a line separator (decision boundary), handling margins and outliers well.

For information on Parameters, Advantages and Disadvantages: http://scikit-learn.org/stable/modules/svm.html

For this experiment we will work on changing the values of the parameters C, kernel, and gamma when initiating the SVC function. It can be a simple choice with a few parameters (ex. 1), multiple parameters (ex. 2), or no parameters at all.

  • ex 1
    linear_kernel_svm = svm.SVC(kernel='rbf', C=10000.)

  • ex 2
    linear_kernel_svm = svm.SVC(C=1.0, kernel='rbf', degree=3, gamma='auto', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape=None, random_state=None)

In machine learning we should avoid OVERFITTING. Because of that, we will tune the parameters below, since all of them affect overfitting as well as results such as accuracy and performance.

C: controls the tradeoff between a smooth decision boundary and classifying training points correctly. In theory, a larger value of C means more training points will be classified correctly.

gamma: defines how far the influence of a single training example reaches. If gamma has a low value, every point has a far reach. If gamma has a high value, each training example has a close reach. A high value might make the decision boundary less linear, for it will be closer to the training points.

The kernel parameter can be 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable. If none is given, 'rbf' will be used.

Please refer to the following url for more information on Parameters:
http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC
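A hedged sketch of such a tuning run, using synthetic data as a stand-in for the email TF-IDF features (the parameter values echo the examples above; the dataset is invented):

```python
import time
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the email features
X, y = make_classification(n_samples=600, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Compare kernel and C/gamma settings on both accuracy and training time
for params in ({"kernel": "linear"},
               {"kernel": "rbf", "C": 10000.0},
               {"kernel": "rbf", "C": 10000.0, "gamma": 0.01}):
    clf = svm.SVC(**params)
    t0 = time.time()
    clf.fit(X_train, y_train)
    print(params, "accuracy: %.3f" % clf.score(X_test, y_test),
          "fit time: %.3fs" % (time.time() - t0))
```

The same loop structure works for the real experiment: swap in the email features and widen the parameter grid as needed.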

Experiment focusing on Accuracy vs. Performance

When accuracy is most important..
However, if performance is a key, ..

Click here for steps to reproduce this experiment

RESULT to be added here
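One concrete way to trade accuracy for speed, borrowed from the course exercises, is to train on a small slice of the training set. A sketch on synthetic stand-in data:

```python
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the email features
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Full training set: best accuracy, slowest fit
full = svm.SVC(kernel="rbf", C=10000.0).fit(X_train, y_train)

# 1% slice: much faster to train, usually at some cost in accuracy
X_small = X_train[:len(X_train) // 100]
y_small = y_train[:len(y_train) // 100]
small = svm.SVC(kernel="rbf", C=10000.0).fit(X_small, y_small)

print("full: %.3f  1%%: %.3f" % (full.score(X_test, y_test),
                                 small.score(X_test, y_test)))
```

SVM training cost grows quickly with the number of samples, so this slice shrinks fit time dramatically, which is why the tradeoff matters for this classifier in particular.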

Method focusing on Gamma parameter

When accuracy is most important..
However, if performance is a key, ..

Click here for steps to reproduce this experiment

Method focusing on Kernel parameter

When accuracy is most important..
However, if performance is a key, ..

Click here for steps to reproduce this experiment

Method focusing on C parameter

When accuracy is most important..
However, if performance is a key, ..

Click here for steps to reproduce this experiment

3.3. Decision Tree

  • make tests with percentile parameter

Advantages and Disadvantages: http://scikit-learn.org/stable/modules/tree.html
Parameters Information: http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier

Main parameters covered in this experiment will be:

  • min_samples_split: the minimum number of samples required to split an internal node, which in practice controls how deep the tree will grow
  • percentile: the percentage of features kept during feature selection (e.g. scikit-learn's SelectPercentile); fewer features speed up training and can reduce overfitting
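A hedged sketch of the min_samples_split effect on synthetic stand-in data (the real experiment uses the email features):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the email features
X, y = make_classification(n_samples=500, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

results = {}
for mss in (2, 50):
    # A larger min_samples_split stops splitting earlier, yielding shallower trees
    clf = DecisionTreeClassifier(min_samples_split=mss, random_state=1)
    clf.fit(X_train, y_train)
    results[mss] = (clf.get_depth(), clf.score(X_test, y_test))
    print("min_samples_split=%d depth=%d accuracy=%.3f" % ((mss,) + results[mss]))
```

Comparing the two depths shows directly how the parameter reins in overfitting: the shallower tree fits the training data less exactly but often generalizes better.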

Method focusing on "min_samples_split" parameter

When accuracy is most important..
However, if performance is a key, ..

Click here for steps to reproduce this experiment

Method focusing on "percentile" parameter

When accuracy is most important..
However, if performance is a key, ..

Click here for steps to reproduce this experiment

4. Summary of Results

Naive Bayes is really easy to implement and efficient. The relative simplicity of the algorithm and its independent-features assumption make it a strong performer for classifying texts. It is good when working with data that contains a lot of noise. On the other hand, it can break on phrases whose meaning depends on word order, since it considers the words individually.

SVM works very well in complicated domains with a clear margin of separation, but it doesn't perform well on very large datasets, for it can become slow and prone to overfitting. As for tuning, we can conclude that the best accuracy was achieved with an RBF kernel, C=10000, and the full dataset. As for performance, there will always be a tradeoff with accuracy: reducing the dataset makes the code faster at the cost of some accuracy.

Decision Trees are easy to use but are prone to overfitting.