In [1]:
import pandas as pd
import sklearn
from bs4 import BeautifulSoup
import re
from nltk.corpus import stopwords
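
A note on the environment: stopwords.words("english") needs the NLTK stopwords corpus on disk, and the html5lib parser used later must be installed. If the corpus has not been downloaded yet (an assumption about your setup), a one-time download avoids a LookupError:

import nltk
nltk.download("stopwords")  # one-time download of the NLTK stopwords corpus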

In [2]:
train = pd.read_csv("./data/labeledTrainData.tsv", delimiter="\t")

In [3]:
train.head()


Out[3]:
        id  sentiment                                             review
0   5814_8          1  With all this stuff going down at the moment w...
1   2381_9          1  \The Classic War of the Worlds\" by Timothy Hi...
2   7759_3          0  The film starts with a manager (Nicholas Bell)...
3   3630_4          0  It must be assumed that those who praised this...
4   9495_8          1  Superbly trashy and wondrously unpretentious 8...

In [4]:
train.shape


Out[4]:
(25000, 3)

In [5]:
train["review"][0][:600]


Out[5]:
"With all this stuff going down at the moment with MJ i've started listening to his music, watching the odd documentary here and there, watched The Wiz and watched Moonwalker again. Maybe i just want to get a certain insight into this guy who i thought was really cool in the eighties just to maybe make up my mind whether he is guilty or innocent. Moonwalker is part biography, part feature film which i remember going to see at the cinema when it was originally released. Some of it has subtle messages about MJ's feeling towards the press and also the obvious message of drugs are bad m'kay.<br /><"

In [6]:
# Helper function: clean one raw review and return it as a single space-joined string
def review_to_words(review):
    # Strip HTML markup
    review_text = BeautifulSoup(review, "html5lib").get_text()
    # Remove non-letter characters
    letters_only = re.sub("[^a-zA-Z]", " ", review_text)
    # Lowercase and split into words
    words = letters_only.lower().split()
    # Build the English stop word set
    stops = set(stopwords.words("english"))
    # Drop stop words
    meaningful_words = [w for w in words if w not in stops]

    return " ".join(meaningful_words)

In [7]:
print("开始清洗并解析影评......")
num_reviews = train["review"].size
clean_train_reviews = []

for i in range(num_reviews):
    if((i + 1) % 5000 == 0):
        print("影评 {} of {}".format(i, num_reviews))
    clean_train_reviews.append(review_to_words(train["review"][i]))


Cleaning and parsing the training set movie reviews...
Review 5000 of 25000
Review 10000 of 25000
Review 15000 of 25000
Review 20000 of 25000
Review 25000 of 25000

In [8]:
clean_train_reviews[0]


Out[8]:
'stuff going moment mj started listening music watching odd documentary watched wiz watched moonwalker maybe want get certain insight guy thought really cool eighties maybe make mind whether guilty innocent moonwalker part biography part feature film remember going see cinema originally released subtle messages mj feeling towards press also obvious message drugs bad kay visually impressive course michael jackson unless remotely like mj anyway going hate find boring may call mj egotist consenting making movie mj fans would say made fans true really nice actual feature film bit finally starts minutes excluding smooth criminal sequence joe pesci convincing psychopathic powerful drug lord wants mj dead bad beyond mj overheard plans nah joe pesci character ranted wanted people know supplying drugs etc dunno maybe hates mj music lots cool things like mj turning car robot whole speed demon sequence also director must patience saint came filming kiddy bad sequence usually directors hate working one kid let alone whole bunch performing complex dance scene bottom line movie people like mj one level another think people stay away try give wholesome message ironically mj bestest buddy movie girl michael jackson truly one talented people ever grace planet guilty well attention gave subject hmmm well know people different behind closed doors know fact either extremely nice stupid guy one sickest liars hope latter'

In [9]:
# Bag of words
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(
    analyzer="word",
    tokenizer=None,
    preprocessor=None,
    stop_words=None,
    max_features=5000
)

train_data_features = vectorizer.fit_transform(clean_train_reviews)
train_data_features = train_data_features.toarray()

In [10]:
train_data_features.shape


Out[10]:
(25000, 5000)

In [11]:
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=100)
forest = forest.fit(train_data_features, train["sentiment"])
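
Before moving to the unlabeled test set, a rough check of how the model generalizes (this step is an addition, not part of the original notebook, and the fold count is an arbitrary choice):

from sklearn.model_selection import cross_val_score

# 3-fold cross-validation accuracy on the training bag-of-words features
scores = cross_val_score(forest, train_data_features, train["sentiment"],
                         cv=3, scoring="accuracy")
print("CV accuracy: {:.3f} (+/- {:.3f})".format(scores.mean(), scores.std()))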

In [12]:
# Read the test data
test = pd.read_csv("./data/testData.tsv", header=0, delimiter="\t",
                   quoting=3)

# Verify that there are 25,000 rows and 2 columns
print(test.shape)

# Create an empty list and append the clean reviews one by one
num_reviews = len(test["review"])
clean_test_reviews = []

print("Cleaning and parsing the test set movie reviews...\n")
for i in range(num_reviews):
    if (i + 1) % 5000 == 0:
        print("Review {} of {}\n".format(i + 1, num_reviews))
    clean_review = review_to_words(test["review"][i])
    clean_test_reviews.append(clean_review)

# Get a bag of words for the test set, and convert to a numpy array
test_data_features = vectorizer.transform(clean_test_reviews)
test_data_features = test_data_features.toarray()

# Use the random forest to make sentiment label predictions
result = forest.predict(test_data_features)

# Copy the results to a pandas dataframe with an "id" column and
# a "sentiment" column
output = pd.DataFrame(data={"id": test["id"], "sentiment": result})

# Use pandas to write the comma-separated output file (quoting=3 is csv.QUOTE_NONE)
output.to_csv("Bag_of_Words_model.csv", index=False, quoting=3)


(25000, 2)
Cleaning and parsing the test set movie reviews...

Review 5000 of 25000

Review 10000 of 25000

Review 15000 of 25000

Review 20000 of 25000

Review 25000 of 25000
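
If the leaderboard metric is ROC AUC (as in the Kaggle tutorial this notebook follows), submitting the predicted probability of the positive class usually scores better than hard 0/1 labels; a hypothetical variant of the last step:

# Probability of sentiment == 1 for each test review
proba = forest.predict_proba(test_data_features)[:, 1]
output_proba = pd.DataFrame(data={"id": test["id"], "sentiment": proba})
output_proba.to_csv("Bag_of_Words_model_proba.csv", index=False, quoting=3)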


In [ ]: