Text retrieval

This guide will introduce techniques for organizing text data. It will show how to analyze a large corpus of text, extracting feature vectors for individual documents, in order to be able to retrieve documents with similar content.

scipy and scikit-learn are required to run through this document, as well as a corpus of text documents. This code can be adapted to work with a set of documents you collect. For the purpose of this example, we will use the well-known Reuters-21578 dataset with 90 categories. To download this dataset, download it manually from here, or run download.sh in the data folder (to get all the other data for ml4a-guides as well), or just run:

wget http://disi.unitn.it/moschitti/corpora/Reuters21578-Apte-90Cat.tar.gz
tar -xzf Reuters21578-Apte-90Cat.tar.gz

In [1]:
import os

Once you've downloaded and unzipped the dataset, take a look inside the folder. It is split into two folders, "training" and "test". Each of those contains 91 subfolders, corresponding to pre-labeled categories, which will be useful for us later when we want to try classifying the category of an unknown message. In this notebook, we are not worried about training a classifier, so we'll end up using both sets together.

Let's note the location of the folder into a variable data_dir.


In [2]:
data_dir = '../data/Reuters21578-Apte-90Cat'

Let's open up a single message and look at the contents. This is the very first message in the training folder, inside of the "acq" folder, which is a category apparently containing news of corporate acquisitions.


In [3]:
post_path = os.path.join(data_dir, "training", "acq", "0000005")
with open (post_path, "r") as p:
    raw_text = p.read()
    print(raw_text)



COMPUTER TERMINAL SYSTEMS <CPML> COMPLETES SALE

     COMMACK, N.Y., Feb 26 - Computer Terminal Systems Inc said
it has completed the sale of 200,000 shares of its common
stock, and warrants to acquire an additional one mln shares, to
<Sedio N.V.> of Lugano, Switzerland for 50,000 dlrs.
    The company said the warrants are exercisable for five
years at a purchase price of .125 dlrs per share.
    Computer Terminal said Sedio also has the right to buy
additional shares and increase its total holdings up to 40 pct
of the Computer Terminal's outstanding common stock under
certain circumstances involving change of control at the
company.
    The company said if the conditions occur the warrants would
be exercisable at a price equal to 75 pct of its common stock's
market price at the time, not to exceed 1.50 dlrs per share.
    Computer Terminal also said it sold the technolgy rights to
its Dot Matrix impact technology, including any future
improvements, to <Woodco Inc> of Houston, Tex. for 200,000
dlrs. But, it said it would continue to be the exclusive
worldwide licensee of the technology for Woodco.
    The company said the moves were part of its reorganization
plan and would help pay current operation costs and ensure
product delivery.
    Computer Terminal makes computer generated labels, forms,
tags and ticket printers and terminals.

Our collection contains over 15,000 articles with a lot of information. It would take way too long to get through all the information.


In [4]:
# this gives us all the groups (from training subfolder, but same for test)
groups = [g for g in os.listdir(os.path.join(data_dir, "training")) if os.path.isdir(os.path.join(data_dir, "training", g))]
print groups


['silver', 'cocoa', 'crude', 'coconut', 'jet', 'retail', 'coconut-oil', 'coffee', 'earn', 'copper', 'ship', 'castor-oil', 'cpi', 'sugar', 'dlr', 'hog', 'iron-steel', 'propane', 'veg-oil', 'rapeseed', 'platinum', 'dfl', 'rape-oil', 'rice', 'palmkernel', 'housing', 'zinc', 'jobs', 'tea', 'l-cattle', 'unknown', 'alum', 'lead', 'interest', 'money-supply', 'cotton', 'soybean', 'rubber', 'corn', 'soy-meal', 'orange', 'barley', 'soy-oil', 'heat', 'lei', 'pet-chem', 'gas', 'fuel', 'ipi', 'palladium', 'grain', 'oat', 'nkr', 'yen', 'groundnut-oil', 'naphtha', 'oilseed', 'rye', 'groundnut', 'wheat', 'cpu', 'gold', 'sun-meal', 'nat-gas', 'reserves', 'instal-debt', 'dmk', 'strategic-metal', 'cotton-oil', 'bop', 'wpi', 'nickel', 'tin', 'income', 'trade', 'rand', 'livestock', 'gnp', 'lumber', 'sun-oil', 'palm-oil', 'nzdlr', 'acq', 'carcass', 'copra-cake', 'potato', 'sunseed', 'sorghum', 'meal-feed', 'money-fx', 'lin-oil']

Let's load all of our documents (news articles) into a single list called docs. We'll iterate through each group, grab all of the posts in each group (from both training and test directories), and add the text of the post into the docs list. We will make sure to exclude duplicate posts by cheking if we've seen the post index before.


In [14]:
import re

docs = []
post_idx = []
for g, group in enumerate(groups):
    if g%10==0:
        print ("reading group %d / %d"%(g+1, len(groups)))
    posts_training = [os.path.join(data_dir, "training", group, p) for p in os.listdir(os.path.join(data_dir, "training", group)) if os.path.isfile(os.path.join(data_dir, "training", group, p))]
    posts_test = [os.path.join(data_dir, "test", group, p) for p in os.listdir(os.path.join(data_dir, "test", group)) if os.path.isfile(os.path.join(data_dir, "test", group, p))]
    posts = posts_training + posts_test
    for post in posts:
        idx = post.split("/")[-1]
        if idx not in post_idx:
            post_idx.append(idx)
            with open(post, "r") as p:
                raw_text = p.read()
                raw_text = re.sub(r'[^\x00-\x7f]',r'', raw_text) 
                docs.append(raw_text)

print("\nwe have %d documents in %d groups"%(len(docs), len(groups)))
print("\nhere is document 100:\n%s"%docs[100])


reading group 1 / 91
reading group 11 / 91
reading group 21 / 91
reading group 31 / 91
reading group 41 / 91
reading group 51 / 91
reading group 61 / 91
reading group 71 / 91
reading group 81 / 91
reading group 91 / 91

we have 12897 documents in 91 groups

here is document 100:


DUTCH COCOA PROCESSORS UNHAPPY WITH ICCO BUFFER

<AUTHOR>    By Jeremy Lovell, Reuters</AUTHOR>
    ROTTERDAM, June 1 - Dutch cocoa processors are unhappy with
the intermittent buying activities of the International Cocoa
Organization's buffer stock manager, industry sources told
Reuters.
    "The way he is operating at the moment is doing almost
nothing to support the market. In fact he could be said to be
actively depressing it," one company spokesman said.
    Including the 3,000 tonnes he acquired on Friday, the total
amount of cocoa bought by the buffer stock manager since he
recently began support operations totals 21,000 tonnes.
    Despite this buying, the price of cocoa is well under the
1,600 Special Drawing Rights, SDRs, a tonne level below which
the bsm is obliged to buy cocoa off the market.
    "Even before he started operations, traders estimated the
manager would need to buy at least up to his 75,000 tonnes
maximum before prices moved up to or above the 1,600 SDR level,
and yet he appears reluctant to do so," one manufacturer said.
    "We all hoped the manager would move into the market to buy
up to 75,000 tonnes in a fairly short period, and then simply
step back," he added.
    "The way the manager is only nibbling at the edge of the
market at the moment is actually depressing sentiment and the
market because everyone is holding back from both buying and
selling waiting to see what the manager will do next," one
processor said.
    "As long as his buying tactics remain the same the market is
likely to stay in the doldrums, and I see no indication he is
about to alter his methods," he added.
    Processors and chocolate manufacturers said consumer prices
for cocoa products were unlikely to be affected by buffer stock
buying for some time to come.

We will now use sklearn's TfidfVectorizer to compute the tf-idf matrix of our collection of documents. The tf-idf matrix is an nxm matrix with the n rows corresponding to our n documents and the m columns corresponding to our terms. The values corresponds to the "importance" of each term to each document, where importance is *. In this case, terms are just all the unique words in the corpus, minus english stopwords, which are the most common words in the english language, e.g. "it", "they", "and", "a", etc. In some cases, terms can be n-grams (n-length sequences of words) or more complex, but usually just words.

To compute our tf-idf matrix, run:


In [15]:
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(stop_words='english')
tfidf = vectorizer.fit_transform(docs)

tfidf


Out[15]:
<12897x33972 sparse matrix of type '<type 'numpy.float64'>'
	with 740808 stored elements in Compressed Sparse Row format>

We see that the variable tfidf is a sparse matrix with a row for each document, and a column for each unique term in the corpus.

Thus, we can interpret each row of this matrix as a feature vector which describes a document. Two documents which have identical rows have the same collection of words in them, although not necessarily in the same order; word order is not preserved in the tf-idf matrix. Regardless, it seems reasonable to expect that if two documents have similar or close tf-idf vectors, they probably have similar content.


In [16]:
doc_idx = 5

doc_tfidf = tfidf.getrow(doc_idx)
all_terms = vectorizer.get_feature_names()
terms = [all_terms[i] for i in doc_tfidf.indices]
values = doc_tfidf.data

print(docs[doc_idx])
print("document's term-frequency pairs:")
print(", ".join("\"%s\"=%0.2f"%(t,v) for t,v in zip(terms,values)))



PEGASUS GOLD <PGULF> STARTS MILLING IN MONTANA

    JEFFERSON CITY, Mont., March 27 - Pegasus Gold Inc said
milling operations have started at its Montana Tunnels open-pit
gold, silver, zinc and lead mine near Helena.
    The start-up is three months ahead of schedule and six mln
dlrs under budget, the company said. Original capital cost of
the mine was 57.5 mln dlrs, but came in at 51.5 mln dlrs, the
company said.
    After a start-up period, the mill is expected to produce 
106,000 ounces of gold, 1,700,000 ounces of silver, 26,000 tons
of zinc and 5,700 tons of lead on an annual basis from
4,300,000 tons of ore, the company said.

document's term-frequency pairs:
"march"=0.03, "said"=0.09, "gold"=0.34, "silver"=0.21, "lead"=0.13, "zinc"=0.22, "ore"=0.10, "000"=0.16, "tons"=0.30, "ounces"=0.21, "dlrs"=0.09, "300"=0.07, "mln"=0.09, "expected"=0.06, "company"=0.12, "capital"=0.06, "near"=0.08, "milling"=0.25, "pegasus"=0.28, "pgulf"=0.15, "starts"=0.11, "montana"=0.27, "jefferson"=0.15, "city"=0.08, "mont"=0.13, "27"=0.06, "operations"=0.07, "started"=0.09, "tunnels"=0.16, "open"=0.08, "pit"=0.14, "helena"=0.15, "start"=0.16, "months"=0.06, "ahead"=0.09, "schedule"=0.11, "budget"=0.08, "original"=0.10, "cost"=0.08, "57"=0.09, "came"=0.09, "51"=0.08, "period"=0.07, "produce"=0.09, "106"=0.10, "700"=0.17, "26"=0.06, "annual"=0.07, "basis"=0.07

In practice however, the term-document matrix alone has several disadvantages. For one, it is very high-dimensional and sparse (mostly zeroes), thus it is computationally costly.

Additionally, it ignores similarity among groups of terms. For example, the words "seat" and "chair" are related, but in a raw term-document matrix they are separate columns. So two sentences with one of each word will not be computed as similarly.

One solution is to use latent semantic analysis (LSA, or sometimes called latent semantic indexing). LSA is a dimensionality reduction technique closely related to principal component analysis, which is commonly used to reduce a high-dimensional set of terms into a lower-dimensional set of "concepts" or components which are linear combinations of the terms.

To do so, we use sklearn's TruncatedSVD function which gives us the LSA by computing a singular value decomposition (SVD) of the tf-idf matrix.


In [17]:
from sklearn.decomposition import TruncatedSVD

lsa = TruncatedSVD(n_components=100)
tfidf_lsa = lsa.fit_transform(tfidf)

How to interpret this? lsa holds our latent semantic analysis, expressing our 100 concepts. It has a vector for each concept, which holds the weight of each term to that concept. tfidf_lsa is our transformed document matrix where each document is a weighted sum of the concepts.

In a simpler analysis with, for example, two topics (sports and tacos), one concept might assign high weights for sports-related terms (ball, score, tournament) and the other one might have high weights for taco-related concepts (cheese, tomato, lettuce). In a more complex one like this one, the concepts may not be as interpretable. Nevertheless, we can investigate the weights for each concept, and look at the top-weighted ones. For example, here are the top terms in concept 1.


In [18]:
components = lsa.components_[1]
all_terms = vectorizer.get_feature_names()

idx_top_terms = sorted(range(len(components)), key=lambda k: components[k])

print("10 highest-weighted terms in concept 1:")
for t in idx_top_terms[:10]:
    print(" - %s : %0.02f"%(all_terms[t], t))


10 highest-weighted terms in concept 1:
 - vs : 32799.00
 - 000 : 1.00
 - net : 21096.00
 - loss : 18710.00
 - cts : 8738.00
 - revs : 26257.00
 - shr : 27999.00
 - profit : 24257.00
 - avg : 4002.00
 - shrs : 28014.00

The top terms in concept 1 appear related to accounting balance sheets; terms like "net", "loss", "profit".

Now, back to our documents. Recall that tfidf_lsa is a transformation of our original tf-idf matrix from the term-space into a concept-space. The concept space is much more valuable, and we can use it to query most similar documents. We expect that two documents which about similar things should have similar vectors in tfidf_lsa. We can use a simple distance metric to measure the similarity, euclidean distance or cosine similarity being the two most common.

Here, we'll select a single query document (index 300), calculate the distance of every other document to our query document, and take the one with the smallest distance to the query.


In [22]:
from scipy.spatial import distance

query_idx = 400

# take the concept representation of our query document
query_features = tfidf_lsa[query_idx]

# calculate the distance between query and every other document
distances = [ distance.euclidean(query_features, feat) for feat in tfidf_lsa ]
    
# sort indices by distances, excluding the first one which is distance from query to itself (0)
idx_closest = sorted(range(len(distances)), key=lambda k: distances[k])[1:]

# print our results
query_doc = docs[query_idx]
return_doc = docs[idx_closest[0]]
print("QUERY DOCUMENT:\n %s \nMOST SIMILAR DOCUMENT TO QUERY:\n %s" %(query_doc, return_doc))


QUERY DOCUMENT:
 

U.S ENERGY SECRETARY PROPOSES OIL TAX INCENTIVES

    WASHINGTON, March 18 - Energy Secretary John Herrington
said he will propose tax incentives to increase domestic oil
and natural gas exploration and production to the Reagan
Administration for consideration.
    "These options boost production, while avoiding the huge
costs associated with proposals like an oil import fee,"
Herrington told a House Energy subcommittee hearing. "It is my
intention to submit these proposals to the domestic policy
council and the cabinet for consideration and review."
    He said proposals, including an increase in the oil
depletion allowance and repeal of the windfall profits tax,
should be revenue neutral and promote domestic production at
the least cost to the economy and the taxpayers.
    "The goal of the Administration policies is to increase
domestic production. I would like to shoot for one mln barrels
a year."
    The proposals were based on a DOE study released yesterday
warning the United States was threatened by a growing
dependence on oil imports.
    "We project free world dependence on Persian Gulf oil at 65
pct by 1995," Herrington said.
    He said it was too soon to say what the Administration
policy on oil tax incentives would be and indicated there would
be opposition to tax changes.
    "Of course, to move forward with these kinds of options
would require reopening tax issues settled last year (in the
tax reform bill) -- an approach which has not, in general, been
favored by the administration. I think what we need is to
debate this within the Administration," he said.
    He said the proposals might raise gasoline prices.
    Herrington did not specifically confirm a report in today's
Washington Post that he had written to President Reagan urging
an increase in the oil depletion allowance.
    Asked about the report by subcommittee members, Herrington
said various proposals were under consideration and would be
debated within the Administration to determine which would have
the most benefits at the least cost.
 
MOST SIMILAR DOCUMENT TO QUERY:
 

DOE SECRETARY PROPOSES OIL TAX INCENTIVES

    WASHINGTON, March 18 - Energy Secretary John Herrington
said he will propose tax incentives to increase domestic oil
and natural gas exploration and production to the Reagan
Administration for consideration.
    "These options boost production, while avoiding the huge
costs associated with proposals like an oil import fee,"
Herrington told a House Energy subcommittee hearing. "It is my
intention to submit these proposals to the domestic policy
council and the cabinet for consideration and review."
    "The goal of the Administration policies is to increase
domestic production. I would like to shoot for one mln barrels
a day," he said.
    The proposals were based on a DOE study released yesterday
warning the United States was threatened by a growing
dependence on oil imports.
    "We project free world dependence on Persian Gulf oil at 65
pct by 1995," Herrington said.

Interesting find! Our query document appears to be about tax incentives for domestic oil and natural gas. Our return document is a related article about the same topic. Try looking at the next few closest results. A quick inspection reveals that most of them are about the same story.

Thus we see the value of this procedure. It gives us a way to quickly identify articles which are related to each other. This can greatly aide journalists who have to sift through a lot of content which is not always indexed or organized usefully.

More creatively, we can think of other ways this can be made useful. For example, what if instead of making our documents the articles themselves, what if they were made to be paragraphs from the articles? Then, we could potentially discover relevant paragraphs about one topic which are buried in an article which is otherwise about a different topic. We can combine this with handcrafted filters as well (date range, presence of a word or name, etc); perhaps you want to quickly find every quote politican X has made about topic Y. This provides an effective means to do so.