In this lesson we'll learn about various techniques to scrape data from websites, using the BeautifulSoup library.

We'll be scraping information on the state senators of Illinois, as well as the list of bills from the Illinois General Assembly. Your first step before scraping should always be to read the Terms of Use or Terms of Agreement for a website. Many websites explicitly prohibit scraping in any form. Moreover, if you're affiliated with an institution, you may be breaching existing contracts by engaging in scraping. UC Berkeley's Library recommends a simple workflow: read the Terms of Use, check the site's robots.txt, and contact the web administrator before you scrape.
While our source's Terms of Use do not explicitly prohibit scraping (nor does its robots.txt), it is still advisable to contact the web administrator of the website. We also don't want to place too much stress on their servers today, so please keep this in mind while following along and executing the code. You should always attempt to contact the web administrator of the site you plan to scrape; oftentimes there is an easier way to get the data that you want.
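As a first check, Python's standard library can parse a site's robots.txt for you. Here's a minimal sketch; it assumes the file lives at the conventional /robots.txt path, so verify that for any site you scrape:
In [ ]:
from urllib import robotparser

# point the parser at the site's robots.txt and fetch it
rp = robotparser.RobotFileParser()
rp.set_url('http://www.ilga.gov/robots.txt')
rp.read()
# True means the rules do not forbid any user agent ('*') from fetching this page
print(rp.can_fetch('*', 'http://www.ilga.gov/senate/default.asp'))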
Let's go ahead and import the Python libraries we'll need:
In [ ]:
import requests # to make GET request
from bs4 import BeautifulSoup # to parse the HTML response
import time # to pause between calls
import csv # to write data to csv
import pandas # to see CSV
In [ ]:
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp')
# read the content of the server’s response as a string
page_source = response.text
print(page_source[:1000])
In [ ]:
# parse the response into an HTML tree soup object
soup = BeautifulSoup(page_source, 'html5lib')
# take a look
print(soup.prettify()[:1000])
BeautifulSoup has a number of functions to find things on a page. Like other scraping tools, BeautifulSoup lets you find elements by their HTML tags, their attributes, and their CSS selectors.
Let's search first for HTML tags.
The function find_all searches the soup tree to find all the elements with a particular HTML tag, and returns all of those elements.
What does the example below do?
In [ ]:
soup.find_all("a")
NB: Because find_all() is the most popular method in the BeautifulSoup search library, you can use a shortcut for it. If you treat the BeautifulSoup object as though it were a function, then it's the same as calling find_all() on that object.
In [ ]:
soup("a")
That's a lot! Many elements on a page will have the same HTML tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes?
We can do this by adding an additional argument to find_all. In the example below, we find all the a tags, then keep only those with class="sidemenu".
In [ ]:
# get only the 'a' tags in 'sidemenu' class
soup("a", class_="sidemenu")
Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into .select(), and it returns all elements matching that string as a CSS selector.
For the example above, we can use "a.sidemenu" as a CSS selector, which returns all a tags with class sidemenu.
In [ ]:
# get elements with "a.sidemenu" CSS Selector.
soup.select("a.sidemenu")
CSS is one way to organize how we style a website. It allows us to categorize and label certain HTML elements, and to use these categories and labels to apply specific styling. CSS selectors are what we use to identify those elements and decide which styles to apply. We won't have time today to go into detail about HTML and CSS, but it's worth covering the three most important CSS selectors:
element selector: simply using the element type, such as a above, will select all elements of that type on the page. Try using your browser's developer tools (Chrome, Firefox, or Safari) to change all elements of type a to a background color of red.
class selector: if you put a period (.) before the name of a class, all elements belonging to that class will be selected. Try using your developer tools to change all elements of the class detail to a background color of red.
ID selector: if you put a hash (#) before the name of an id, all elements with that id will be selected. Try using the developer tools to change all elements with the id Senate to a background color of red.
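The same three selector types work in BeautifulSoup's select() method. A quick sketch, reusing the detail class and Senate id from the exercises above (check the page's actual markup before relying on them):
In [ ]:
# element selector: every <a> tag on the page
soup.select("a")
# class selector: every element with class="detail"
soup.select(".detail")
# ID selector: the element with id="Senate" (ids are meant to be unique)
soup.select("#Senate")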
The above three examples will take all elements with the given property, but oftentimes you only want certain elements within the hierarchy. We can do that by placing selectors side by side, separated by a space: each selector then matches only within the elements matched by the one before it.
Using your developer tools, change the background-color of all a elements in only the "Current Senate Members" table.
In [ ]:
# your code here
In [ ]:
# this is a list
soup.select("a.sidemenu")
# we first want to get an individual tag object
first_link = soup.select("a.sidemenu")[0]
# check out its class
print(type(first_link))
It's a tag! Which means it has a text member:
In [ ]:
print(first_link.text)
You'll see there is some extra spacing here; we can use the strip method to remove it:
In [ ]:
print(first_link.text.strip())
Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes.
You can access a tag’s attributes by treating the tag like a dictionary:
In [ ]:
print(first_link['href'])
Nice, but that doesn't look like a full URL! It's a relative path; don't worry, we'll get to this soon. As a quick exercise, collect the href value of every sidemenu link into a list called rel_paths.
In [ ]:
# your code here
In [ ]:
print(rel_paths)
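When you do need absolute URLs, the standard library's urljoin can resolve each relative path against the page we requested. A minimal sketch, assuming rel_paths holds the href strings from the exercise above:
In [ ]:
from urllib.parse import urljoin

# resolve each relative path against the page we fetched
base_url = 'http://www.ilga.gov/senate/default.asp'
full_urls = [urljoin(base_url, path) for path in rel_paths]
print(full_urls[:5])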
Believe it or not, that's all you need to scrape a website. Let's apply these skills to scrape the 98th Illinois General Assembly.
Our goal is to scrape information on each senator, including their name, district, and party.
In [ ]:
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
In [ ]:
# get all tr elements
rows = soup.find_all("tr")
print(len(rows))
But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want.
In [ ]:
# select every element matching the 'tr tr tr' CSS selector
rows = soup.select('tr tr tr')
print(rows[2].prettify())
We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above.
In [ ]:
# select only those 'td' tags with class 'detail'
row = rows[2]
detail_cells = row.select('td.detail')
detail_cells
Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member.
In [ ]:
# Keep only the text in each of those cells
row_data = [cell.text for cell in detail_cells]
print(row_data)
Now we can combine the BeautifulSoup tools with our basic Python skills to scrape an entire web page.
In [ ]:
# check it out
print(row_data[0]) # name
print(row_data[3]) # district
print(row_data[4]) # party
In [ ]:
# make a GET request
response = requests.get('http://www.ilga.gov/senate/default.asp?GA=98')
# read the content of the server’s response
page_source = response.text
# soup it
soup = BeautifulSoup(page_source, "html5lib")
# create empty list to store our data
members = []
# select every element matching the 'tr tr tr' CSS selector
rows = soup.select('tr tr tr')
# loop through all rows
for row in rows:
    # select only those 'td' tags with class 'detail'
    detail_cells = row.select('td.detail')
    # get rid of junk rows: keep only rows with the five detail cells we saw above
    if len(detail_cells) != 5:
        continue
    # keep only the text in each of those cells
    row_data = [cell.text for cell in detail_cells]
    # collect information
    name = row_data[0]
    district = row_data[3]
    party = row_data[4]
    # store in a tuple
    senator = (name, district, party)
    # append to list
    members.append(senator)
In [ ]:
print(len(members))
print()
print(members)
The code above retrieves each senator's name, district, and party.
We now want to retrieve the URL for each senator's list of bills. The format for the list of bills for a given senator is:
http://www.ilga.gov/senate/SenatorBills.asp + ? + GA=98 + &MemberID=memberID + &Primary=True
to get something like:
http://www.ilga.gov/senate/SenatorBills.asp?MemberID=1911&GA=98&Primary=True
You should be able to see that, unfortunately, memberID is not something our scraping code currently pulls out.
Your initial task is to modify the code above so that we also retrieve the full URL of each member's page of primary-sponsored bills, and return it along with their name, district, and party.
Tips:
To do this, you'll want to find the anchor element (<a>) in each legislator's row of the table. You can again use the .select() method on the row object in the loop to do this, similar to the command that finds all of the td.detail cells in the row. Remember that we only want the link to the legislator's bills, not the committees or the legislator's profile page.
The anchor elements will look something like <a href="/senate/Senator.asp/...">Bills</a>. The string in the href attribute contains the relative link we are after. You can access an attribute of a BeautifulSoup Tag object the same way you access a Python dictionary: anchor['attributeName']. (See the documentation for more details.) There are a lot of different ways to use BeautifulSoup to get things done; whatever you need to do to pull that href out is fine.
Use the code you wrote in Challenge 4 and simply add the full path to the tuple.
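To make the first two tips concrete, here is a minimal sketch of pulling the bills link out of a single row. The assumption that the right anchor is the one whose text is 'Bills' comes from the example markup above, so verify it against the live page:
In [ ]:
# a sketch, not the full solution: pull the bills link from one row
row = rows[2]
# keep only anchors whose visible text is 'Bills' (based on the example markup above)
bill_anchors = [a for a in row.select('a') if a.text.strip() == 'Bills']
if bill_anchors:
    # the href is a relative path, so prepend the domain to get the full URL
    full_path = 'http://www.ilga.gov' + bill_anchors[0]['href']
    print(full_path)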
In [ ]:
# your code here
In [ ]:
members[:5]
Cool! Now you can probably guess how to loop it all together by iterating through the links we just extracted.
Now we want to scrape the webpages corresponding to bills sponsored by each senator.
Write a function called get_bills(url) to parse a given member's bills URL. This will involve:
making a GET request to the URL and souping the response
using the BeautifulSoup library to find all of the <td> elements with the class billlist
returning a list of tuples, each with a bill's description, chamber, last action, and last action date
I've started the function for you. Fill in the rest.
In [ ]:
# your code here
def get_bills(url):
    # make the GET request
    response = requests.get(url)
    page_source = response.text
    soup = BeautifulSoup(page_source, "html5lib")
    # get the table rows
    rows = soup.select('tr tr tr')
    # make empty list to collect the info
    bills = []
    for row in rows:
        # get the columns with class 'billlist'
        cells = row.select('td.billlist')
        # skip junk rows; this assumes a real bill row has exactly five billlist
        # cells (bill number, description, chamber, last action, last action date),
        # so check the page's markup if you get no results
        if len(cells) != 5:
            continue
        # get the text in each column
        row_data = [cell.text.strip() for cell in cells]
        # append the data, dropping the bill number to match the CSV header used below
        bills.append((row_data[1], row_data[2], row_data[3], row_data[4]))
    return bills
In [ ]:
# test your code:
test_url = members[0][3]
print(test_url)
get_bills(test_url)[0:5]
Finally, we want to gather up the bills sponsored by each member. The code below loops over the senate members in members, calls get_bills() on each member's bills URL, and collects one row per bill in a list called bills_info.
NOTE: Please call the function time.sleep(5) in each iteration of the loop, so that we don't destroy the state's web site.
In [ ]:
bills_info = []
for member in members[:3]:  # only go through the first 3 members, to keep the demo quick
    print(member[0])
    # member[3] is the bills URL added in the challenge above
    member_bills = get_bills(member[3])
    for b in member_bills:
        bill = list(member) + list(b)
        bills_info.append(bill)
    time.sleep(5)
In [ ]:
bills_info
We can write this to a CSV too:
In [ ]:
# manually decide on header names
header = ['Senator', 'District', 'Party', 'Bills Link', 'Description', 'Chamber', 'Last Action', 'Last Action Date']
with open('all-bills.csv', 'w', newline='') as output_file:
    csv_writer = csv.writer(output_file)
    csv_writer.writerow(header)
    csv_writer.writerows(bills_info)
pandas.read_csv('all-bills.csv')