In [1]:
from IPython.core.display import HTML

print("Setting custom CSS for the IPython Notebook")
styles = open('custom.css', 'r').read()
HTML(styles)


Setting custom CSS for the IPython Notebook
Out[1]:

In [2]:
## all imports
import numpy as np
import urllib2
import bs4  # this is Beautiful Soup

from pandas import Series
import pandas as pd
from pandas import DataFrame

import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

import seaborn as sns
sns.set_context("talk")
sns.set_style("white")

CS109

Verena Kaynig-Fittkau

  • vkaynig@seas.harvard.edu
  • staff@cs109.org

Announcements

  • Nice page to promote your projects

http://sites.fas.harvard.edu/~huit-apps/archive/index.html

Announcements

  • HW1 solutions are online
  • Awesome TFs are trying to get the grading done this week
  • Be proactive:
    • start early
    • use office hours and Piazza
    • if resources are not enough: let us know!

Announcements

  • homework submission format
    • create a folder lastname_firstinitial_hw#
    • place the notebook and any other files into that folder
    • notebooks should be executed
    • compress the folder
    • submit to iSites

Today's lecture:

  • all about data scraping
  • What is it?
  • How to do it:
    • from a website
    • with an API
  • Plus: Some more SVD!

Python data scraping

  • Why scrape the web?
    • vast source of information
    • automate tasks
    • keep up with sites
    • fun!

Can you think of examples?

Read and Tweet!

Twitter Sentiments

L.A. Happy Hours

Python data scraping

  • copyright and permissions:
    • be careful and polite
    • give credit
    • be mindful of media law
    • don't be evil (no spam, overloading sites, etc.)

Robots.txt

Robots.txt

  • specified by web site owner
  • gives instructions to web robots (aka your script)
  • is located in the top-level directory of the web server

http://www.example.com/robots.txt
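
Python's standard library can check these rules for you. A minimal sketch using the robotparser module (Python 2; it moved to urllib.robotparser in Python 3, and the example URL is just a placeholder):

import robotparser

rp = robotparser.RobotFileParser()
rp.set_url('http://www.example.com/robots.txt')
rp.read()

# is our script allowed to fetch this page?
print rp.can_fetch('*', 'http://www.example.com/some/page.html')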

Robots.txt

What does this one do?

User-agent: Google
Disallow:

User-agent: *
Disallow: /

Things to consider:

  • can simply be ignored (compliance is voluntary)
  • can be a security risk - Why?

Scraping with Python:

  • scraping is all about HTML tags
  • bad news:
    • need to learn about tags
    • websites can be ugly

HTML

  • HyperText Markup Language

  • standard for creating webpages

  • HTML tags

    • have angle brackets
    • typically come in pairs

In [3]:
s = """<!DOCTYPE html>
<html>
  <head>
    <title>This is a title</title>
  </head>
  <body>
    <h2> Test </h2>
    <p>Hello world!</p>
  </body>
</html>"""

h = HTML(s)
h


Out[3]:
This is a title

Test

Hello world!

Useful Tags

  • heading <h1></h1> ... <h6></h6>

  • paragraph <p></p>

  • line break <br>

  • link with attribute

<a href="http://www.example.com/">An example link</a>
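
These render inline just like the earlier example; a quick sketch using the same HTML helper:

s2 = """<h3>A heading</h3>
<p>A paragraph with a line break<br>
and <a href="http://www.example.com/">an example link</a>.</p>"""
HTML(s2)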

Scraping with Python:

  • example of a beautifully simple webpage:

http://www.crummy.com/software/BeautifulSoup

Scraping with Python:

  • good news:
    • some browsers help
    • look for: inspect element
    • need only basic html

Try Ctrl-Shift-I in Chrome on Windows/Linux

Try Command-Option-I in Chrome on OS X

For Safari:

  • from your Safari menu bar, click Safari > Preferences
  • then select the Advanced tab
  • select: Show Develop menu in menu bar
  • click Develop from the menu bar
  • click Show Web Inspector

Scraping with Python

  • different useful libraries:
    • urllib
    • beautifulsoup
    • pattern
    • lxml
    • ...

In [4]:
url = 'http://www.crummy.com/software/BeautifulSoup'
source = urllib2.urlopen(url).read()
print source


<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"
"http://www.w3.org/TR/REC-html40/transitional.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Beautiful Soup: We called him Tortoise because he taught us.</title>
<link rev="made" href="mailto:leonardr@segfault.org">
<link rel="stylesheet" type="text/css" href="/nb/themes/Default/nb.css">
<meta name="Description" content="Beautiful Soup: a library designed for screen-scraping HTML and XML.">
<meta name="generator" content="Markov Approximation 1.4 (module: leonardr)">
<meta name="author" content="Leonard Richardson">
</head>
<body bgcolor="white" text="black" link="blue" vlink="660066" alink="red">
<img align="right" src="10.1.jpg" width="250"><br />

<p>You didn't write that awful page. You're just trying to get some
data out of it. Beautiful Soup is here to help. Since 2004, it's been
saving programmers hours or days of work on quick-turnaround
screen scraping projects.</p>

<div align="center">

<a href="bs4/download/"><h1>Beautiful Soup</h1></a>

<p>"A tremendous boon." -- <a
href="http://www.awaretek.com/python/index.html">Python411
Podcast</a></p>

<p>[ <a href="#Download">Download</a> | <a
href="bs4/doc/">Documentation</a> | <a href="#HallOfFame">Hall of Fame</a> | <a href="https://code.launchpad.net/beautifulsoup">Source</a> | <a href="https://groups.google.com/forum/?fromgroups#!forum/beautifulsoup">Discussion group</a> ]</p>

<small>If Beautiful Soup has saved you a lot of time and money, the
best way to pay me back is to check out <a
href="http://www.candlemarkandgleam.com/shop/constellation-games/"><i>Constellation
Games</i>, my sci-fi novel about alien video games</a>.<br />You can
<a
href="http://constellation.crummy.com/Constellation%20Games%20excerpt.html">read
the first two chapters for free</a>, and the full novel starts at 5
USD. Thanks!</small> </div>

<p><i>If you have questions, send them to <a
href="https://groups.google.com/forum/?fromgroups#!forum/beautifulsoup">the discussion
group</a>. If you find a bug, <a href="https://bugs.launchpad.net/beautifulsoup/">file it</a>.</i></p>

<p>Beautiful Soup is a Python library designed for quick turnaround
projects like screen-scraping. Three features make it powerful:

<ol>

<li>Beautiful Soup provides a few simple methods and Pythonic idioms
for navigating, searching, and modifying a parse tree: a toolkit for
dissecting a document and extracting what you need. It doesn't take
much code to write an application

<li>Beautiful Soup automatically converts incoming documents to
Unicode and outgoing documents to UTF-8. You don't have to think
about encodings, unless the document doesn't specify an encoding and
Beautiful Soup can't detect one. Then you just have to specify the
original encoding.

<li>Beautiful Soup sits on top of popular Python parsers like <a
href="http://lxml.de/">lxml</a> and <a
href="http://code.google.com/p/html5lib/">html5lib</a>, allowing you
to try out different parsing strategies or trade speed for
flexibility.

</ol>

<p>Beautiful Soup parses anything you give it, and does the tree
traversal stuff for you. You can tell it "Find all the links", or
"Find all the links of class <tt>externalLink</tt>", or "Find all the
links whose urls match "foo.com", or "Find the table heading that's
got bold text, then give me that text."

<p>Valuable data that was once locked up in poorly-designed websites
is now within your reach. Projects that would have taken hours take
only minutes with Beautiful Soup.

<p>Interested? <a href="bs4/doc/">Read more.</a>

<a name="Download"><h2>Download Beautiful Soup</h2></a>

<p>The current release is <a href="bs4/download/">Beautiful Soup
4.3.2</a> (October 2, 2013). You can install it with <code>pip install
beautifulsoup4</code> or <code>easy_install
beautifulsoup4</code>. It's also available as the
<code>python-beautifulsoup4</code> package in recent versions of
Debian, Ubuntu, and Fedora .

<p>Beautiful Soup 4 works on both Python 2 (2.6+) and Python 3.

<p>Beautiful Soup is licensed under the MIT license, so you can also
download the tarball, drop the <code>bs4/</code> directory into almost
any Python application (or into your library path) and start using it
immediately. (If you want to do this under Python 3, you will need to
manually convert the code using <code>2to3</code>.)

<h3>Beautiful Soup 3</h3>

<p>Beautiful Soup 3 was the official release line of Beautiful Soup
from May 2006 to March 2012. It is considered stable, and only
critical bugs will be fixed. <a
href="http://www.crummy.com/software/BeautifulSoup/bs3/documentation.html">Here's
the Beautiful Soup 3 documentation.</a>

<p>Beautiful Soup 3 works only under Python 2.x. It is licensed under
the same license as Python itself.

<p>The current release of Beautiful Soup 3 is <a
href="download/3.x/BeautifulSoup-3.2.1.tar.gz">3.2.1</a> (February 16,
2012). You can install Beautiful Soup 3 with <code>pip install
BeautifulSoup</code> or <code>easy_install BeautifulSoup</code>. It's
also available as <code>python-beautifulsoup</code> in Debian and
Ubuntu, and as <code>python-BeautifulSoup</code> in Fedora.

<p>You can also download the tarball and use
<code>BeautifulSoup.py</code> in your project directly.


<a name="HallOfFame"><h2>Hall of Fame</h2></a>

<p>Over the years, Beautiful Soup has been used in hundreds of
different projects. There's no way I can list them all, but I do want
to highlight a few high-profile projects. Beautiful Soup isn't what
makes these projects interesting, but it did make their completion easier:

<ul>

<li><a
 href="http://www.nytimes.com/2007/10/25/arts/design/25vide.html">"Movable
 Type"</a>, a work of digital art on display in the lobby of the New
 York Times building, uses Beautiful Soup to scrape news feeds.

<li>Reddit uses Beautiful Soup to <a
href="https://github.com/reddit/reddit/blob/85f9cff3e2ab9bb8f19b96acd8da4ebacc079f04/r2/r2/lib/media.py">parse
a page that's been linked to and find a representative image</a>.

<li>Alexander Harrowell uses Beautiful Soup to <a
 href="http://www.harrowell.org.uk/viktormap.html">track the business
 activities</a> of an arms merchant.

<li>The developers of Python itself used Beautiful Soup to <a
href="http://svn.python.org/view/tracker/importer/">migrate the Python
bug tracker from Sourceforge to Roundup</a>.

<li>The <a href="http://www2.ljworld.com/">Lawrence Journal-World</a>
uses Beautiful Soup to <A
href="http://www.b-list.org/weblog/2010/nov/02/news-done-broke/">gather
statewide election results</a>.

<li>The <a href="http://esrl.noaa.gov/gsd/fab/">NOAA's Forecast
Applications Branch</a> uses Beautiful Soup in <a
href="http://laps.noaa.gov/topograbber/">TopoGrabber</a>, a script for
downloading "high resolution USGS datasets."

</ul>

<p>If you've used Beautiful Soup in a project you'd like me to know
about, please do send email to me or <a
href="http://groups.google.com/group/beautifulsoup/">the discussion
group</a>.

<h2>Development</h2>

<p>Development happens at <a
href="https://launchpad.net/beautifulsoup">Launchpad</a>. You can <a
href="https://code.launchpad.net/beautifulsoup/">get the source
code</a> or <a href="https://bugs.launchpad.net/beautifulsoup/">file
bugs</a>.<hr><table><tr><td valign="top">
<p>This document (<a href="/source/software/BeautifulSoup/index.bhtml">source</a>) is part of Crummy, the webspace of <a href="/self/">Leonard Richardson</a> (<a href="/self/contact.html">contact information</a>). It was last modified on Tuesday, May 27 2014, 10:05:41 Nowhere Standard Time and last built on Tuesday, September 23 2014, 17:00:04 Nowhere Standard Time.</p><p><table class="licenseText"><tr><td><a href="http://creativecommons.org/licenses/by-sa/2.0/"><img border="0" src="/nb//resources/img/somerights20.jpg"></a></td><td valign="top">Crummy is &copy; 1996-2014 Leonard Richardson. Unless otherwise noted, all text licensed under a <a href="http://creativecommons.org/licenses/by-sa/2.0/">Creative Commons License</a>.</td></tr></table></span><!--<rdf:RDF xmlns="http://web.resource.org/cc/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"><Work rdf:about="http://www.crummy.com/"><dc:title>Crummy: The Site</dc:title><dc:rights><Agent><dc:title>Crummy: the Site</dc:title></Agent></dc:rights><dc:format>text/html</dc:format><license rdf:resource=http://creativecommons.org/licenses/by-sa/2.0//></Work><License rdf:about="http://creativecommons.org/licenses/by-sa/2.0/"></License></rdf:RDF>--></p></td><td valign=top><p><b>Document tree:</b>
<dl><dd><a href="http://www.crummy.com/">http://www.crummy.com/</a><dl><dd><a href="http://www.crummy.com/software/">software/</a><dl><dd><a href="http://www.crummy.com/software/BeautifulSoup/">BeautifulSoup/</a></dl>
</dl>
</dl>


Site Search:

<form method="get" action="/search/">
        <input type="text" name="q" maxlength="255" value=""></input>
        </form>
        </td>

</tr>

</table>
</body>
</html>

Quiz:

  • Is the word 'Alice' mentioned on the Beautiful Soup homepage?
  • How often does the word 'Soup' occur on the site?
    • hint: use .count()
  • At what index does the substring 'alien video games' first occur?
    • hint: use .find()

In [5]:
## is 'Alice' in source?
print 'Alice' in source

## count occurrences of 'Soup'
print source.count('Soup')

## find index of 'alien video games'
print source.find('alien video games')

Beautiful Soup

  • designed to make your life easier
  • many useful functions for parsing HTML code

Some examples


In [6]:
## get bs4 object
soup = bs4.BeautifulSoup(source)

## show prettify()
print soup.prettify()

## show how to find all a tags
soup.findAll('a')

## ***Why does this not work? ***
## findAll matches tag names; there is no <Soup> tag in the document,
## so this returns an empty list
#soup.findAll('Soup')

Some examples


In [7]:
## get attribute value from an element:
## find tag: here the first a tag
first_tag = soup.find('a')
## get attribute: here the href
first_tag.get('href')

## get all links in the page
link_list = [l.get('href') for l in soup.findAll('a')]

## filter all external links
external_links = [l for l in link_list
                  if l is not None and l.startswith('http')]

Parsing the Tree


In [8]:
s = """<!DOCTYPE html><html><head><title>This is a title</title></head><body><h3> Test </h3><p>Hello world!</p></body></html>"""
## get bs4 object
tree = bs4.BeautifulSoup(s)
## get html root node
## get head from root using contents
## get body from root

## could directly access body

Quiz:

  • Find the h3 tag by parsing the tree starting at body
  • Create a list of all Hall of Fame entries listed on the Beautiful Soup webpage
    • hint: it is the only unordered list on the page (tag ul)

In [9]:
## get h3 tag from body
body.contents[0]

In [10]:
## use ul as entry point
entry_point = soup.find('ul')
## get hall of fame list from entry point
hall_of_fame_list = entry_point.findAll('li')

## reformat into a list
tmp = [li.contents for li in hall_of_fame_list]

## maybe show some advanced python:
## flatten each entry into a single string
entries = ["".join(str(a) for a in entry) for entry in tmp]

Advanced Example

Designed by Katharine Jarmul

https://github.com/kjam/python-web-scraping-tutorial

Scraping Happy Hours

Scrape the L.A. happy hour listings and filter them by personal preferences

http://www.downtownla.com/3_10_happyHours.asp?action=ALL


In [11]:
stuff_i_like = ['burger', 'sushi', 'sweet potato fries', 'BBQ']
found_happy_hours = []
my_happy_hours = []
# First, I'm going to identify the areas of the page I want to look at
url = 'http://www.downtownla.com/3_10_happyHours.asp?action=ALL'
source = urllib2.urlopen(url).read()
tables = bs4.BeautifulSoup(source)

In [12]:
# Then, I'm going to sort out the *exact* parts of the page
# that match what I'm looking for...
for t in tables.findAll('p', {'class': 'calendar_EventTitle'}):
    text = t.text
    for s in t.findNextSiblings():
        text += '\n' + s.text
    found_happy_hours.append(text)

print "The scraper found %d happy hours!" % len(found_happy_hours)


The scraper found 66 happy hours!

In [13]:
# Now I'm going to loop through the food I like
# and see if any of the happy hour descriptions match
for food in stuff_i_like:
    for hh in found_happy_hours:
        # checking for text AND making sure I don't have duplicates
        if food in hh and hh not in my_happy_hours:
            print "YAY! I found some %s!" % food
            my_happy_hours.append(hh)

print "I think you might like %d of them, yipeeeee!" % len(my_happy_hours)


YAY! I found some burger!
YAY! I found some sushi!
YAY! I found some sushi!
YAY! I found some sushi!
I think you might like 4 of them, yipeeeee!

In [14]:
# Now, let's make a mail message we can read:
message = 'Hey Katharine,\n\n\n'
message += 'OMG, I found some stuff for you in Downtown, take a look.\n\n'
message += '==============================\n'.join(my_happy_hours)
message = message.encode('utf-8')
# To read more about encoding:
# http://diveintopython.org/xml_processing/unicode.html
message = message.replace('\t', '').replace('\r', '')
message += '\n\nXOXO,\n Your Py Script'

#print message

Getting Data with an API

  • API: application programming interface
  • some sites try to make your life easier
  • Twitter, New York Times, IMDb, Rotten Tomatoes, Yelp, ...

API keys

  • required for data access
  • identifies application (you)
  • monitors usage
  • limits rates

In [15]:
import json
import secret 
import requests

api_key = secret.rottenTomatoes_key()

url = 'http://api.rottentomatoes.com/api/public/v1.0/lists/dvds/top_rentals.json?apikey=' + api_key
data = urllib2.urlopen(url).read()
#print data

Python Dictionaries

  • built-in data type
  • uses key: value pairs

In [16]:
a = {'a': 1, 'b':2}
print a

#show keys

#show values

#show for loop over all entries
#explicit, zipped, iteritems


{'a': 1, 'b': 2}
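
For reference, a minimal sketch of what the comments in the cell above ask for (Python 2 syntax, matching the rest of this notebook):

# show keys
print a.keys()

# show values
print a.values()

# for loop over all entries: explicit
for k in a:
    print k, a[k]

# zipped
for k, v in zip(a.keys(), a.values()):
    print k, v

# iteritems
for k, v in a.iteritems():
    print k, v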

JSON

  • JavaScript Object Notation
  • human readable
  • transmit attribute-value pairs

In [17]:
a = {'a': 1, 'b':2}
s = json.dumps(a)
a2 = json.loads(s)

In [18]:
# create dictionary from JSON
dataDict = json.loads(data)

# explore dictionary
print dataDict.keys()

# filter movies (the Rotten Tomatoes response has a 'movies' entry)
movies = dataDict['movies']

# find critics score of the first movie
print movies[0]['ratings']['critics_score']

Quiz

  • build a list with critics scores
  • build a list with audience scores

In [19]:
# critics scores list
critics_scores = [m['ratings']['critics_score'] for m in movies]

# audience scores list
audience_scores = [m['ratings']['audience_score'] for m in movies]

In [20]:
# create pandas data frame with critics and audience score
# first create numpy array
scores = np.array([critics_scores, audience_scores]).T
# then create DataFrame with data and columns
scoreFrame = DataFrame(data=scores, columns=['critics', 'audience'])
# also create a list with all movie titles
movie_titles = [m['title'] for m in movies]
# set index of dataFrame BEWARE of inplace!
# (without inplace=True, set_index returns a new frame)
scoreFrame.set_index([movie_titles], inplace=True)

In [21]:
# create a bar plot with the data
scoreFrame.plot(kind='bar')

# set the title to Score Comparison
plt.title('Score Comparison')

# set the x label
plt.xlabel('Movies')

# set the y label
plt.ylabel('Scores')

# show the plot
plt.show()

Twitter Example:

  • API a bit more complicated
  • libraries make life easier
  • python-twitter

https://github.com/bear/python-twitter


In [22]:
import twitter

# define the necessary keys
cKey = secret.twitterAPI_key()
cSecret = secret.twitterAPI_secret()
aKey = secret.twitterAPI_access_token_key()
aSecret = secret.twitterAPI_access_token_secret()

# create the api object
api = twitter.Api(consumer_key=cKey, consumer_secret=cSecret, access_token_key=aKey, access_token_secret=aSecret)

In [23]:
# get the user timeline with screen_name = 'rafalab'
twitter_statuses = api.GetUserTimeline(screen_name='rafalab')

# create a data frame
# first get a list of panda Series or dict
pdSeriesList = [pd.Series(t.AsDict()) for t in twitter_statuses]

# then create the data frame
data = pd.DataFrame(pdSeriesList)

In [24]:
# filter tweets with enough retweet_count (the threshold is arbitrary)
maybe_interesting = data[data.retweet_count > 20]

# print the text of these tweets
for text in maybe_interesting.text:
    print text

Extracting columns:

Warning: The returned column is a view on the data

  • it is not a copy
  • you change the Series => you change the DataFrame

In [25]:
# create a view for favorite_count on maybe_interesting
favorites = maybe_interesting['favorite_count']

# change a value
favorites.iloc[0] = 9999

# look at original frame: the change shows up there as well
print maybe_interesting['favorite_count']

# do it again but this time with copy
favorites = maybe_interesting['favorite_count'].copy()
favorites.iloc[0] = 0
# this time maybe_interesting stays unchanged

Singular Value Decomposition

  • remember Rafael's nice illustration last week

  • some more python details

http://cs109.github.io/2014/pages/lectures/04-distance.html#/11


In [26]:
import scipy.linalg
np.random.seed(seed=99)

# make some data up
mean = [0,0]
cov = [[1.0,0.7],[0.7,1.0]] 

x,y = np.random.multivariate_normal(mean,cov,500).T

In [27]:
# plot the data
fig = plt.figure()
plt.scatter(x,y)
plt.axis('equal')
plt.show()



In [28]:
# create a data matrix
matrix = np.column_stack((x,y))
# compute SVD
U,s,Vh = scipy.linalg.svd(matrix)
# blow up s
S = scipy.linalg.diagsvd(s, 500, 2)
# reconstruct the data (sanity test)
reconstruction = np.dot(U, np.dot(S, Vh))

print matrix[1,:]
print reconstruction[1,:]
print np.allclose(matrix, reconstruction)


[-0.77618857  0.25387936]
[-0.77618857  0.25387936]
True

In [29]:
# show the column vectors of V
V = Vh.T
plt.scatter(x, y)
plt.plot([0, V[0,0]], [0,V[1,0]], c='r', linewidth=10.0)
plt.plot([0, V[0,1]], [0,V[1,1]], c='y', linewidth=10.0)
plt.axis('equal')
plt.show()



In [30]:
# two ways to project the data
projection = np.dot(U, S[:,:1])
projection2 = np.dot(matrix, V[:,:1])
np.allclose(projection, projection2)


Out[30]:
True

In [31]:
# compare the plots
plt.clf()
zeros = np.zeros_like(projection)
plt.scatter(projection, zeros, c='r', zorder=2)
plt.scatter(x,y,c='b', zorder=2)

for px, py, proj in zip(x,y,projection):
    plt.plot([px,proj],[py,0],c='0.5', linewidth=1.0, zorder=1)
    
plt.axis('equal')
plt.show()



In [32]:
# try to reconstruct back to 2D
# just a reminder
projection = np.dot(U, S[:,:1])
# now the reconstruction
reconstruction = np.dot(projection, Vh[:1,:])
reconstruction.shape


Out[32]:
(500L, 2L)

In [33]:
# compare the plots
plt.clf()
zeros = np.zeros_like(projection)
plt.scatter(reconstruction[:,0], reconstruction[:,1], c='r', zorder=2)
plt.scatter(x,y,c='b', zorder=2)

for px, py, rx,ry in zip(x,y,reconstruction[:,0], 
                         reconstruction[:,1]):
    plt.plot([px,rx],[py,ry],c='0.5', linewidth=1.0, zorder=1)
    
plt.axis('equal')
plt.show()


Eigenfaces

  • image patches contain faces
  • each image is a data point
  • each pixel is a dimension
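
The SVD code above carries over directly to images. A minimal sketch, with random numbers standing in for real face patches (the image size and count here are made up):

# each row of X is one flattened image patch, each column one pixel
n_images, h, w = 100, 32, 32
faces = np.random.rand(n_images, h, w)    # stand-in for real face data
X = faces.reshape(n_images, h * w)

# center the data and take the SVD
X_centered = X - X.mean(axis=0)
U, s, Vh = scipy.linalg.svd(X_centered, full_matrices=False)

# the rows of Vh are the "eigenfaces"; reshape one back into an image
first_eigenface = Vh[0].reshape(h, w)

# project every image onto the top k eigenfaces for a compact code
k = 10
codes = np.dot(X_centered, Vh[:k].T)      # shape: (n_images, k)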

Reconstructions

Face Detection