In [10]:
import requests
import lxml.html as lh

url = 'http://www.pythonforbeginners.com/feedparser/using-feedparser-in-python'
page = requests.get(url, verify=False)
doc = lh.fromstring(page.content)

# Explore what the parsed document object offers
print dir(doc)

# The extraction step (commented out here) selects the article-body paragraphs
# and concatenates their text:
#text = doc.xpath('//p[@itemprop="articleBody"]')
#finalText = str()
#for par in text:
#    finalText += par.text_content()
#print finalText
Even with the best of websites, I don’t think I’ve ever encountered a scraping job that couldn’t be described as “A small and simple general model with heaps upon piles of annoying little exceptions”
- Swizec Teller http://swizec.com/blog/scraping-with-mechanize-and-beautifulsoup/swizec/5039
In general, there are three problems that you might face when undertaking a scraping task:

1. You want the content of a single page, or of a set of pages that you know about in advance.
2. You want content from many pages whose URLs follow a predictable pattern, so the URLs can be generated programmatically.
3. You want content scattered across a site, so the site has to be crawled by following links.
The key is that you must identify which type of problem you have. After this, you must look at the HTML structure of a webpage and construct a script that will select the parts of the page that are of interest to you.
As mentioned previously, Python has various libraries for scraping tasks. The ones I have found the most useful are:

- requests, for downloading pages
- lxml, for parsing HTML and selecting elements with XPath
- pattern, for working with RSS feeds
- Scrapy, for crawling across the pages of a site
- BeautifulSoup, as a more forgiving HTML parser
In addition, you need some method to examine the source of a webpage in a structured manner. I use Chrome which, as a WebKit browser, allows for "Inspect Element" functionality.
So, let's look at some webpage source. I'm going to pick on the New York Times throughout (I thought about using the eventdata.psu.edu page... it actually has very well-formatted HTML).
The easiest approach to webscraping is getting the content from a page that you know in advance. I'll go ahead and keep using that NYT page we looked at earlier. There are three basic steps to scraping a single page:

1. Download the page.
2. Parse the HTML into a tree of elements.
3. Select the elements of interest with an XPath expression and extract their text.

The following code executes these three steps and prints the result.
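The article URL below is a placeholder (any NYT article page using the itemprop="articleBody" markup described above should work); otherwise this sketch simply mirrors the requests/lxml approach from the cell at the top of this section.
In [ ]:
import requests
import lxml.html as lh

# Placeholder URL: substitute the NYT article page you actually want to scrape
url = 'http://www.nytimes.com/path/to/some/article.html'

# Step 1: download the page
page = requests.get(url)

# Step 2: parse the HTML into a tree of elements
doc = lh.fromstring(page.content)

# Step 3: select the article-body paragraphs with XPath and pull out their text
text = doc.xpath('//p[@itemprop="articleBody"]')
finalText = str()
for par in text:
    finalText += par.text_content()

print finalText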
So we now have our lovely output. This output can be manipulated in various ways, or written to an output file.
Let's say you want to get a stream of news stories in an easy manner. You could visit the homepage of the NYT and work from there, or you can use an RSS feed. RSS stands for Really Simple Syndication and is, at its heart, an XML document, which allows it to be easily parsed. The fantastic pattern library allows for easy parsing of RSS feeds. Using pattern's Newsfeed() method, it is possible to parse a feed and obtain attributes of the XML document. The search() method returns an iterable composed of the individual stories. Each result has a variety of attributes such as .url, .title, .description, and more. The following code demonstrates these methods.
In [ ]:
import pattern.web

url = 'http://rss.nytimes.com/services/xml/rss/nyt/World.xml'

# Grab the five most recent items from the NYT World feed
results = pattern.web.Newsfeed().search(url, count=5)

print '%s \n\n %s \n\n %s \n\n' % (results[0].url, results[0].title, results[0].description)
That looks pretty good, but the description looks nastier than we would generally prefer. Luckily, pattern provides functions to get rid of the HTML in a string.
In [ ]:
print '%s \n\n %s \n\n %s \n\n' % (results[0].url, results[0].title, pattern.web.plaintext(results[0].description))
While it's all well and good to have the title and description of a story, this is often insufficient (some descriptions are just the title, which isn't particularly helpful). To get further information on the story, it is possible to combine the single-page scraping discussed previously with the results from the RSS scrape. The following code implements a function to scrape the NYT article pages, which can be done easily since the NYT is wonderfully consistent in its HTML, and then iterates over the results, applying the scrape function to each one.
In [ ]:
import codecs
import os

# codecs.open does not expand '~', so expand the home directory explicitly
outputFile = codecs.open(os.path.expanduser('~/tutorialOutput.txt'), encoding='utf-8', mode='a')

def scrape(url):
    page = requests.get(url)
    doc = lh.fromstring(page.content)
    text = doc.xpath('//p[@itemprop="articleBody"]')
    finalText = str()
    for par in text:
        finalText += par.text_content()
    return finalText

for result in results:
    outputText = scrape(result.url)
    outputFile.write(outputText)

outputFile.close()
The final approach is for a website that contains information you want, where the pages are spread around in a fairly consistent manner but there is no simple, straightforward pattern to how they are named.
I'll offer a brief aside here to mention that it is often possible to make slight modifications to the URL of a website and obtain many different pages. For example, a website that contains Indian parliament speeches has the URL http://164.100.47.132/LssNew/psearch/Result13.aspx?dbsl= with differing values appended after the =. Thus, using a for loop allows for the programmatic creation of different URLs. Some sample code is below.
In [ ]:
url = 'http://164.100.47.132/LssNew/psearch/Result13.aspx?dbsl='

for i in xrange(5175, 5973):
    newUrl = url + str(i)
    print 'Scraping: %s' % newUrl
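In practice you would do something with each generated URL rather than just print it. As a rough sketch, each page could be fetched with requests and the raw HTML stored for later parsing; the actual extraction step is left out because the markup of the Result13.aspx pages isn't covered here.
In [ ]:
import requests

url = 'http://164.100.47.132/LssNew/psearch/Result13.aspx?dbsl='

# Fetch a handful of the generated pages and keep the raw HTML keyed by id.
# Parsing the speeches out of that HTML would require inspecting the page
# structure, which is not covered in this tutorial.
rawPages = {}
for i in xrange(5175, 5180):
    newUrl = url + str(i)
    print 'Scraping: %s' % newUrl
    rawPages[i] = requests.get(newUrl).content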
Getting back on topic, it is often more difficult than the above to iterate over numerous webpages within a site. This is where the Scrapy library comes in. Scrapy allows for the creation of web spiders that crawl over a webpage, following any links that they find. This is often far more difficult to implement than a simple scraper since it requires the identification of rules for link following. The State Department offers a good example. I don't really have time to go into the depths of writing a Scrapy spider, but I thought I would put up some code to illustrate what it looks like.
In [ ]:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from BeautifulSoup import BeautifulSoup
import re
import codecs

class MySpider(CrawlSpider):
    name = 'statespider' #name is a name

    #defines the URL that the spider should start on. adjust the year.
    start_urls = ['http://www.state.gov/r/pa/prs/dpb/2010/index.htm',
                  ]

    #defines the rules for the spider
    rules = (
        #allows only links within the navigation panel that have /year/ in them
        Rule(SgmlLinkExtractor(allow=('/2010/',), restrict_xpaths=('//*[@id="local-nav"]',))),
        #follows links within the calendar on the index page for the individual years, while denying any links with /video/ in them
        Rule(SgmlLinkExtractor(restrict_xpaths=('//*[@id="dpb-calendar"]',), deny=('/video/',)), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url) #prints the response.url out in the terminal to help with debugging

        #Insert code to scrape page content

        #opens the file defined above and writes 'texts' using utf-8
        #(filename and texts are placeholders to be filled in by the scraping code)
        with codecs.open(filename, 'w', encoding='utf-8') as output:
            output.write(texts)
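To actually run a spider like this, it would normally live inside a Scrapy project (created with scrapy startproject) and be launched from the command line with something like scrapy crawl statespider.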
Web scraping is much, much, much more of an art than a science. It is often non-trivial to identify the XPath selector that will get you what you want. Also, some web programmers can't seem to decide how they want to structure the pages they write, so they just change the HTML every few pages. Notice that, for the NYT example, if articleBody gets changed to articleBody1, everything breaks. There are ways around this that are often convoluted, messy, and hackish. Usually, however, where there is a will there is a way.
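As one sketch of such a workaround, the XPath predicate can be loosened so that it tolerates small naming changes: starts-with() matches both articleBody and articleBody1, at the cost of possibly matching elements you didn't intend.
In [ ]:
# doc is the parsed article page from the earlier single-page example.
# Instead of requiring an exact itemprop value, match anything that starts
# with "articleBody" -- hackish, but it survives the articleBody1 rename.
text = doc.xpath('//p[starts-with(@itemprop, "articleBody")]')
finalText = str()
for par in text:
    finalText += par.text_content()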
...brief pause to demonstrate the lengths this can go to.
As a wrap-up, I thought I would show the workflow I have been using to perform real-time scraping from various news sites of stories pertaining to human atrocities. This is illustrative both of web scraping and of the issues that can accompany programming.
The general flow of the scraper is:
RSS feed -> identify relevant stories -> scrape story -> place results in MongoDB -> repeat every hour
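A minimal sketch of that loop is below. It assumes the scrape() function defined earlier, uses a crude keyword match on the title to stand in for "identify relevant stories", and uses pymongo for the MongoDB step; the keyword list, database and collection names, and the one-hour sleep are illustrative choices rather than part of the real scraper, and deduplication and error handling are omitted.
In [ ]:
import time
import pattern.web
from pymongo import MongoClient

feedUrl = 'http://rss.nytimes.com/services/xml/rss/nyt/World.xml'
keywords = ['massacre', 'atrocity', 'genocide']  # illustrative keyword filter

# 'scraper' database and 'stories' collection are placeholder names
stories = MongoClient()['scraper']['stories']

while True:
    for result in pattern.web.Newsfeed().search(feedUrl, count=100):
        # identify relevant stories with a crude keyword match on the title
        if any(word in result.title.lower() for word in keywords):
            stories.insert_one({'url': result.url,
                                'title': result.title,
                                'text': scrape(result.url)})  # scrape() from earlier
    time.sleep(60 * 60)  # repeat every hour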