The labeled data set consists of 50,000 IMDB movie reviews selected for sentiment analysis. The sentiment labels are binary: a review with an IMDB rating below 5 gets a sentiment score of 0, and a rating of 7 or above gets a sentiment score of 1. No individual movie has more than 30 reviews. The 25,000-review labeled training set does not include any of the same movies as the 25,000-review test set. In addition, another 50,000 IMDB reviews are provided without any rating labels.
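As a small illustration of this labeling rule (the thresholds come from the dataset description above; the function itself is a hypothetical helper, not part of the tutorial code):

def rating_to_sentiment(rating):
    # Map an IMDB star rating (1-10) to the binary label used in this dataset
    if rating < 5:
        return 0      # negative review
    if rating >= 7:
        return 1      # positive review
    return None       # ratings of 5 or 6 are excluded from the labeled set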
In [1]:
import os
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from KaggleWord2VecUtility import KaggleWord2VecUtility  # in the same folder
import pandas as pd
import numpy as np
In [2]:
traindata_path = "/Users/chengjun/bigdata/kaggle_popcorn_data/labeledTrainData.tsv"
testdata_path = "/Users/chengjun/bigdata/kaggle_popcorn_data/testData.tsv"
train = pd.read_csv(traindata_path, header=0, delimiter="\t", quoting=3)
test = pd.read_csv(testdata_path, header=0, delimiter="\t", quoting=3 )
print('The first review is:')
print(train["review"][0])
In [33]:
train[:3]
Out[33]:
In [34]:
test[:3]
Out[34]:
In [5]:
import nltk
# Download text data sets, including stop words. If you already have the
# NLTK data sets, just close the download window when it opens.
nltk.download()
Out[5]:
In [7]:
# Initialize an empty list to hold the clean reviews
clean_train_reviews = []
# Loop over each review; create an index i that goes from 0 to the length
# of the movie review list
print "Cleaning and parsing the training set movie reviews...\n"
for i in xrange( 0, len(train["review"])):
clean_train_reviews.append(" ".join(KaggleWord2VecUtility.review_to_wordlist(train["review"][i], True)))
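KaggleWord2VecUtility here is the helper script shipped with the Kaggle "Bag of Words Meets Bags of Popcorn" tutorial. Its review_to_wordlist roughly does the following (a simplified sketch under that assumption, not the exact implementation):

import re
from bs4 import BeautifulSoup
from nltk.corpus import stopwords

def review_to_wordlist_sketch(review, remove_stopwords=False):
    # 1. Strip the HTML markup left over from the IMDB pages
    text = BeautifulSoup(review, "html.parser").get_text()
    # 2. Keep letters only, lowercase, and split into tokens
    words = re.sub("[^a-zA-Z]", " ", text).lower().split()
    # 3. Optionally drop English stop words (the True argument above)
    if remove_stopwords:
        stops = set(stopwords.words("english"))
        words = [w for w in words if w not in stops]
    return words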
In [35]:
clean_train_reviews[0]
Out[35]:
In [36]:
train['review'][0]
Out[36]:
In [8]:
# ****** Create a bag of words from the training set
# Initialize the "CountVectorizer" object, which is scikit-learn's
# bag of words tool.
vectorizer = CountVectorizer(analyzer="word",
                             tokenizer=None,
                             preprocessor=None,
                             stop_words=None,
                             max_features=5000)
In [9]:
# fit_transform() does two things: first, it fits the model and learns
# the vocabulary; second, it transforms our training data into feature
# vectors. The input to fit_transform should be a list of strings.
train_data_features = vectorizer.fit_transform(clean_train_reviews)
# Numpy arrays are easy to work with, so convert the result to an array
train_data_features = train_data_features.toarray()
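To see which terms made it into the 5,000-word vocabulary, inspect the fitted vectorizer. (get_feature_names_out is the method name in scikit-learn 1.0+; older versions call it get_feature_names.)

vocab = vectorizer.get_feature_names_out()
print(len(vocab))    # 5000
print(vocab[:10])    # the first few terms, in alphabetical order
# Total count of each vocabulary word over the whole training corpus
dist = np.sum(train_data_features, axis=0)
for count, tag in zip(dist[:10], vocab[:10]):
    print(count, tag)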
In [16]:
type(train_data_features)
Out[16]:
In [20]:
len(train_data_features)
Out[20]:
In [23]:
train_data_features[1][100:105]
Out[23]:
RandomForestClassifier
In machine learning, a random forest is a classifier made up of many decision trees, and its output class is the mode of the classes output by the individual trees. Leo Breiman and Adele Cutler developed the random forest algorithm, and "Random Forests" is their trademark. The term derives from "random decision forests", proposed by Tin Kam Ho of Bell Labs in 1995. The method combines Breiman's "bootstrap aggregating" idea with Ho's "random subspace method" to build an ensemble of decision trees.
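A toy run of scikit-learn's implementation illustrates the ensemble idea (the data here are synthetic, purely for illustration):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X, y)
# scikit-learn averages the per-tree class probabilities and predicts the
# class with the highest average; for fully grown trees this is effectively
# a majority vote over the 100 trees
print(clf.predict(X[:3]))
print(clf.predict_proba(X[:3]))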
In [17]:
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in older versions
forest_val = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(forest_val, train_data_features, train["sentiment"], cv=3)
scores.mean()
Out[17]:
In [18]:
scores
Out[18]:
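With an integer cv and a classifier, cross_val_score uses stratified folds by default, so each of the three folds keeps the 50/50 positive/negative balance of the training labels. The explicit equivalent would be:

from sklearn.model_selection import StratifiedKFold
scores = cross_val_score(forest_val, train_data_features, train["sentiment"],
                         cv=StratifiedKFold(n_splits=3))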
In [10]:
# ******* Train a random forest using the bag of words
# Initialize a Random Forest classifier with 100 trees
forest = RandomForestClassifier(n_estimators = 100)
# Fit the forest to the training set, using the bag of words as
# features and the sentiment labels as the response variable
# This may take a few minutes to run
forest = forest.fit(train_data_features, train["sentiment"])
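Once fitted, the forest exposes per-feature importance scores; matching them against the vectorizer's vocabulary shows which words carry the most signal (a quick diagnostic, not part of the original tutorial; get_feature_names_out assumes scikit-learn 1.0+ as above):

importances = forest.feature_importances_
vocab = vectorizer.get_feature_names_out()
# Indices of the ten most important features, highest first
top = np.argsort(importances)[::-1][:10]
for idx in top:
    print(vocab[idx], importances[idx])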
In [11]:
# Create an empty list and append the clean reviews one by one
clean_test_reviews = []
for i in range(len(test["review"])):
    clean_test_reviews.append(" ".join(KaggleWord2VecUtility.review_to_wordlist(test["review"][i], True)))
In [25]:
len(clean_test_reviews)
Out[25]:
In [27]:
clean_test_reviews[0]
Out[27]:
In [28]:
test['review'][0]
Out[28]:
In [12]:
# Get a bag of words for the test set using the vocabulary learned from the
# training set (note: transform, not fit_transform), and convert to a numpy array
test_data_features = vectorizer.transform(clean_test_reviews)
test_data_features = test_data_features.toarray()
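A quick shape check confirms that the test matrix has the same 5,000 columns as the training matrix, i.e. both use the vocabulary learned from the training set:

print(test_data_features.shape)   # expected: (25000, 5000)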
In [32]:
test_data_features[3]
Out[32]:
In [14]:
# Use the random forest to make sentiment label predictions
result = forest.predict(test_data_features)
# Copy the results to a pandas dataframe with an "id" column and a "sentiment" column
output = pd.DataFrame(data={"id": test["id"], "sentiment": result})
# Use pandas to write the comma-separated output file
output.to_csv('/Users/chengjun/github/cjc2016/data/Bag_of_Words_model.csv', index=False, quoting=3)
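Before submitting, it is worth sanity-checking that the file has the expected 25,000 rows and two columns (reading back from the same local path written above):

check = pd.read_csv('/Users/chengjun/github/cjc2016/data/Bag_of_Words_model.csv')
print(check.shape)    # expected: (25000, 2)
print(check.head())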
In [ ]: