In [10]:
import requests
import lxml.html as lh

url = 'http://www.pythonforbeginners.com/feedparser/using-feedparser-in-python'
# verify=False skips SSL certificate verification (hence the warnings below)
page = requests.get(url, verify=False)
doc = lh.fromstring(page.content)
# dir() shows everything the parsed document object offers
print dir(doc)

#text = doc.xpath('//p[@itemprop="articleBody"]')
#finalText = str()
#for par in text:
#    finalText += par.text_content()

#print finalText


['__class__', '__contains__', '__copy__', '__deepcopy__', '__delattr__', '__delitem__', '__dict__', '__doc__', '__format__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__len__', '__module__', '__new__', '__nonzero__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_init', 'addnext', 'addprevious', 'append', 'attrib', 'base', 'base_url', 'body', 'classes', 'clear', 'cssselect', 'drop_tag', 'drop_tree', 'extend', 'find', 'find_class', 'find_rel_links', 'findall', 'findtext', 'forms', 'get', 'get_element_by_id', 'getchildren', 'getiterator', 'getnext', 'getparent', 'getprevious', 'getroottree', 'head', 'index', 'insert', 'items', 'iter', 'iterancestors', 'iterchildren', 'iterdescendants', 'iterfind', 'iterlinks', 'itersiblings', 'itertext', 'keys', 'label', 'make_links_absolute', 'makeelement', 'nsmap', 'prefix', 'remove', 'replace', 'resolve_base_href', 'rewrite_links', 'set', 'sourceline', 'tag', 'tail', 'text', 'text_content', 'values', 'xpath']
/home/obestwalter/work/presentation_stack/python-course/.direnv/python-2.7.6/local/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
/home/obestwalter/work/presentation_stack/python-course/.direnv/python-2.7.6/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:791: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)

Web Scraping

Even with the best of websites, I don’t think I’ve ever encountered a scraping job that couldn’t be described as “A small and simple general model with heaps upon piles of annoying little exceptions”

- Swizec Teller http://swizec.com/blog/scraping-with-mechanize-and-beautifulsoup/swizec/5039

How is it accomplished?

In general, there are three problems that you might face when undertaking a scraping task:

  1. You have a single page, or a set of pages, that you know of and you want to scrape.
  2. You have a source that generates links to various pages with the same structure, e.g., an RSS feed.
  3. You have a website that contains many pages of interest scattered across the site, and you only have general rules for reaching these pages.

The key is that you must identify which type of problem you have. After this, you must look at the HTML structure of a webpage and construct a script that will select the parts of the page that are of interest to you.

There's a library for that! (Yea, I know...)

As mentioned previously, Python has various libraries for scraping tasks. The ones I have found the most useful, and the ones used throughout this tutorial, are requests (for fetching pages), lxml (for parsing HTML and selecting content with XPath), pattern (for parsing RSS feeds), and Scrapy (for crawling whole sites).

In addition, you need some method to examine the source of a webpage in a structured manner. I use Chrome, which, as a WebKit browser, provides "Inspect Element" functionality.

So, let's look at some webpage source. I'm going to pick on the New York Times throughout (I thought about using the eventdata.psu.edu page...it actually has very well formatted HTML).

Scraping a page that you know

The easiest approach to webscraping is getting the content from a page that you know in advance. I'll go ahead and keep using that NYT page we looked at earlier. There are three basic steps to scraping a single page:

  1. Get (request) the page
  2. Parse the page content
  3. Select the content of interest using an XPath selector

The following code executes these three steps and prints the result.
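This is a minimal sketch of those steps: the URL below is a placeholder for whichever article page you want to grab, and it assumes the article text sits in <p itemprop="articleBody"> elements, the same selector the scrape() function uses later in this notebook.


In [ ]:
import requests
import lxml.html as lh

# 1. Get (request) the page -- replace the placeholder URL with a real article URL
url = 'http://www.nytimes.com/path/to/some-article.html'
page = requests.get(url)

# 2. Parse the page content into an lxml document tree
doc = lh.fromstring(page.content)

# 3. Select the article paragraphs with an XPath selector and join their text
paragraphs = doc.xpath('//p[@itemprop="articleBody"]')
finalText = str()
for par in paragraphs:
    finalText += par.text_content()

print finalText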

So we now have our lovely output. This output can be manipulated in various ways, or written to an output file.

Let's say you want to get a stream of news stories in an easy manner. You could visit the homepage of the NYT and work from there, or you can use an RSS feed. RSS stands for Really Simple Syndication and is, at its heart, an XML document, which makes it easy to parse. The fantastic library pattern allows for easy parsing of RSS feeds. Using pattern's Newsfeed() method, it is possible to parse a feed and obtain attributes of the XML document. The search() method returns an iterable composed of the individual stories. Each result has a variety of attributes such as .url, .title, .description, and more. The following code demonstrates these methods.


In [ ]:
import pattern.web

# Parse the NYT World News RSS feed and pull back the five most recent stories
url = 'http://rss.nytimes.com/services/xml/rss/nyt/World.xml'
results = pattern.web.Newsfeed().search(url, count=5)

# Each result carries the story's URL, title, and description
print '%s \n\n %s \n\n %s \n\n' % (results[0].url, results[0].title, results[0].description)

That looks pretty good, but the description looks nastier than we would generally prefer. Luckily, pattern provides functions to get rid of the HTML in a string.


In [ ]:
# plaintext() strips the HTML out of the description
print '%s \n\n %s \n\n %s \n\n' % (results[0].url, results[0].title, pattern.web.plaintext(results[0].description))

While it's all well and good to have the title and description of a story, this is often insufficient (some descriptions are just the title, which isn't particularly helpful). To get further information on the story, it is possible to combine the single-page scraping discussed previously with the results from the RSS scrape. The following code implements a function to scrape the NYT article pages, which is easy since the NYT is wonderfully consistent in its HTML, and then iterates over the results, applying the scrape function to each one.


In [ ]:
import codecs
import os

# codecs.open() does not expand '~', so expand the path explicitly
outputFile = codecs.open(os.path.expanduser('~/tutorialOutput.txt'), encoding='utf-8', mode='a')

def scrape(url):
    # Request the article page and parse it
    page = requests.get(url)
    doc = lh.fromstring(page.content)
    # The NYT wraps article text in <p itemprop="articleBody"> elements
    text = doc.xpath('//p[@itemprop="articleBody"]')
    finalText = str()
    for par in text:
        finalText += par.text_content()
    return finalText

# Scrape each story from the RSS results and append its text to the output file
for result in results:
    outputText = scrape(result.url)
    outputFile.write(outputText)

outputFile.close()

Scraping arbitrary websites

The final approach is for a website that contains information you want, where the pages are spread around in a fairly consistent manner, but there is no simple, straightforward scheme by which the pages are named.

I'll offer a brief aside here to mention that it is often possible to make slight modifications to the URL of a website and obtain many different pages. For example, a website that contains Indian parliament speeches has the URL http://164.100.47.132/LssNew/psearch/Result13.aspx?dbsl= with differing values appended after the =. Thus, using a for-loop allows for the programmatic creation of different URLs. Some sample code is below.


In [ ]:
url = 'http://164.100.47.132/LssNew/psearch/Result13.aspx?dbsl='

# Append a different ID to the base URL on each pass; a real scraper would
# fetch and parse each newUrl here instead of just printing it
for i in xrange(5175, 5973):
    newUrl = url + str(i)
    print 'Scraping: %s' % newUrl

Getting back on topic, it is often more difficult than the above to iterate over numerous webpages within a site. This is where the Scrapy library comes in. Scrapy allows for the creation of web spiders that crawl over a webpage, following any links that it finds. This is often far more difficult to implement than a simple scraper since it requires the identification of rules for link following. The State Department offers a good example. I don't really have time to go into the depths of writing a Scrapy spider, but I thought I would put up some code to illustrate what it looks like.


In [ ]:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from BeautifulSoup import BeautifulSoup
import re
import codecs

class MySpider(CrawlSpider):
    name = 'statespider' #the name used to launch the spider, e.g. scrapy crawl statespider
    start_urls = ['http://www.state.gov/r/pa/prs/dpb/2010/index.htm',
    ] #defines the URL that the spider should start on. adjust the year.

    #defines the rules for the spider
    rules = (
        #allows only links within the navigation panel that have /year/ in them
        Rule(SgmlLinkExtractor(allow=('/2010/',), restrict_xpaths=('//*[@id="local-nav"]',))),

        #follows links within the calendar on the index page for the individual years, while denying any links with /video/ in them
        Rule(SgmlLinkExtractor(restrict_xpaths=('//*[@id="dpb-calendar"]',), deny=('/video/',)), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url) #prints the response.url out in the terminal to help with debugging

        #Insert code to scrape the page content into `texts` and build an output `filename`

        #opens the file and writes `texts` using utf-8
        with codecs.open(filename, 'w', encoding='utf-8') as output:
            output.write(texts)

The Pitfalls of Webscraping

Web scraping is much, much, much more of an art than a science. It is often non-trivial to identify the XPath selector that will get you what you want. Also, some web programmers can't seem to decide how they want to structure their pages, so they just change the HTML every few pages. Notice that for the NYT example, if articleBody gets changed to articleBody1, everything breaks. There are ways around this that are often convoluted, messy, and hackish. Usually, however, where there is a will there is a way.
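
One common (and admittedly hackish) workaround is to try a list of candidate selectors and keep whichever one actually returns content. Below is a minimal sketch that reuses the lxml document tree from earlier; all but the first selector are made up for illustration and would need to be replaced with whatever variants the site actually uses.


In [ ]:
def scrape_with_fallbacks(doc):
    # Candidate XPath selectors, tried in order; all but the first are
    # hypothetical placeholders for the variants you actually encounter
    selectors = ['//p[@itemprop="articleBody"]',
                 '//p[@itemprop="articleBody1"]',
                 '//div[@class="articleBody"]//p']
    for selector in selectors:
        paragraphs = doc.xpath(selector)
        if paragraphs:
            return ''.join(par.text_content() for par in paragraphs)
    return str()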

...brief pause to demonstrate the lengths this can go to.

PITF Human Atrocities

As a wrap up, I thought I would show the workflow I have been using to perform real-time scraping from various news sites of stories pertaining to human atrocities. This is illustrative both of web scraping and of the issues that can accompany programming.

The general flow of the scraper is:

RSS feed -> identify relevant stories -> scrape story -> place results in MongoDB -> repeat every hour
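
A stripped-down sketch of that loop is below. It assumes a local MongoDB instance reachable through pymongo, reuses the scrape() function defined earlier, and uses a purely illustrative keyword match to decide which stories are relevant; the real pipeline is more involved than this.


In [ ]:
import time
import pattern.web
from pymongo import MongoClient

# Illustrative keyword list -- the real relevance check is more sophisticated
KEYWORDS = ['massacre', 'atrocities', 'mass killing']

feed_url = 'http://rss.nytimes.com/services/xml/rss/nyt/World.xml'
stories = MongoClient()['atrocities_db']['stories']  # assumes a local mongod is running

while True:
    # 1. Pull the RSS feed
    for result in pattern.web.Newsfeed().search(feed_url, count=100):
        # 2. Identify relevant stories (naive keyword match on the title)
        title = result.title.lower()
        if any(word in title for word in KEYWORDS):
            # 3. Scrape the story text with the scrape() function from earlier
            text = scrape(result.url)
            # 4. Place the result in MongoDB (a real scraper would also deduplicate);
            #    newer pymongo versions use insert_one() instead of insert()
            stories.insert({'url': result.url, 'title': result.title, 'text': text})
    # 5. Repeat every hour
    time.sleep(60 * 60)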