Before working on this assignment please read these instructions fully. In the submission area, you will notice that you can click the link to Preview the Grading for each step of the assignment. This is the criteria that will be used for peer grading. Please familiarize yourself with the criteria before beginning the assignment.
This assignment requires you to find at least two related datasets on the web, and to visualize these datasets to answer a question within the broad topic of sports or athletics (see below) for the region of Ann Arbor, Michigan, United States, or the United States more broadly.
You can merge these datasets with data from other regions if you like! For instance, you might want to compare Ann Arbor, Michigan, United States to another city or country. In that case at least one source file must be about Ann Arbor, Michigan, United States.
You are welcome to choose datasets at your discretion, but keep in mind they will be shared with your peers, so choose appropriate datasets. Sensitive, confidential, illicit, and proprietary materials are not good choices for this assignment. You are welcome to upload datasets of your own as well, and link to them using a third-party repository such as GitHub, Bitbucket, Pastebin, etc. Please be aware of the Coursera terms of service with respect to intellectual property.
Also, you are welcome to preserve data in its original language, but for the purposes of grading you should provide English translations. You are welcome to provide multiple visuals in different languages if you would like!
As this assignment is for the whole course, you must incorporate principles discussed in the first week, such as maintaining a high data-ink ratio (Tufte) and aligning with Cairo's principles of truth, beauty, function, and insight.
Here are the assignment instructions:
What do we mean by sports or athletics? For this category we are interested in sporting events or athletics broadly, please feel free to creatively interpret the category when building your research question!
Looking for an example? Here's what our course assistant put together for the Ann Arbor, MI, USA area using sports and athletics as the topic. Example Solution File
In [1]:
!conda install html5lib -y
!conda install beautifulsoup4 -y
!conda install lxml -y
# urllib is part of the Python 3 standard library; no install is needed
# (urllib2 is Python 2 only and is not a conda package)
In [2]:
import urllib.request
from bs4 import BeautifulSoup
import pandas as pd
soup = BeautifulSoup(urllib.request.urlopen('https://en.wikipedia.org/wiki/2008_Detroit_Lions_season').read(), 'lxml')
table = soup.find_all("table", { "class" : "wikitable" })[2]
print(table)
In [3]:
n_columns = 0
n_rows = 0
column_names = []

# Find the number of rows and columns,
# and grab the column titles if we can
for row in table.find_all('tr'):
    # Count the body rows of the table
    td_tags = row.find_all('td')
    if len(td_tags) > 0:
        n_rows += 1
        if n_columns == 0:
            # Set the number of columns for our table
            n_columns = len(td_tags)
    # Handle column names if we find them
    th_tags = row.find_all('th')
    if len(th_tags) > 0 and len(column_names) == 0:
        for th in th_tags:
            column_names.append(th.get_text())

# Safeguard: column titles must match the number of columns
if len(column_names) > 0 and len(column_names) != n_columns:
    raise Exception("Column titles do not match the number of columns")

columns = column_names if len(column_names) > 0 else range(0, n_columns)
lions08df = pd.DataFrame(columns=columns, index=range(0, n_rows))
row_marker = 0
for row in table.find_all('tr'):
    column_marker = 0
    columns = row.find_all('td')
    for column in columns:
        lions08df.iat[row_marker, column_marker] = column.get_text()
        column_marker += 1
    if len(columns) > 0:
        row_marker += 1
In [4]:
lions08df
Out[4]:
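As an aside, the manual row/column bookkeeping above can often be replaced by `pd.read_html`, which parses every `<table>` element in a page into a list of DataFrames (an index like `[2]` would then mirror the `soup.find_all(...)[2]` lookup). A minimal, self-contained sketch using a hypothetical table in the shape of the Wikipedia schedule tables:

```python
import pandas as pd
from io import StringIO

# Hypothetical markup standing in for a Wikipedia schedule table
html = """
<table class="wikitable">
  <tr><th>Week</th><th>Opponent</th><th>Result</th></tr>
  <tr><td>1</td><td>Atlanta Falcons</td><td>L 21-34</td></tr>
  <tr><td>2</td><td>Green Bay Packers</td><td>L 25-48</td></tr>
</table>
"""

# read_html returns one DataFrame per <table>; the <th> row is
# detected automatically and becomes the column index
schedule = pd.read_html(StringIO(html))[0]
print(schedule.columns.tolist())  # ['Week', 'Opponent', 'Result']
```

This requires an HTML parser backend (lxml or html5lib), which the install cell at the top already provides.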
In [5]:
soup = BeautifulSoup(urllib.request.urlopen('https://en.wikipedia.org/wiki/2009_St._Louis_Rams_season').read(), 'lxml')
table = soup.find_all("table", { "class" : "wikitable" })[1]
print(table)
In [6]:
n_columns = 0
n_rows = 0
column_names = []

# Find the number of rows and columns,
# and grab the column titles if we can
for row in table.find_all('tr'):
    # Count the body rows of the table
    td_tags = row.find_all('td')
    if len(td_tags) > 0:
        n_rows += 1
        if n_columns == 0:
            # Set the number of columns for our table
            n_columns = len(td_tags)
    # Handle column names if we find them
    th_tags = row.find_all('th')
    if len(th_tags) > 0 and len(column_names) == 0:
        for th in th_tags:
            column_names.append(th.get_text())

# Safeguard: column titles must match the number of columns
if len(column_names) > 0 and len(column_names) != n_columns:
    raise Exception("Column titles do not match the number of columns")

columns = column_names if len(column_names) > 0 else range(0, n_columns)
Rams09df = pd.DataFrame(columns=columns, index=range(0, n_rows))
row_marker = 0
for row in table.find_all('tr'):
    column_marker = 0
    columns = row.find_all('td')
    for column in columns:
        Rams09df.iat[row_marker, column_marker] = column.get_text()
        column_marker += 1
    if len(columns) > 0:
        row_marker += 1
In [7]:
Rams09df
Out[7]:
In [8]:
soup = BeautifulSoup(urllib.request.urlopen('https://en.wikipedia.org/wiki/2000_San_Diego_Chargers_season').read(), 'lxml')
table = soup.find_all("table", { "class" : "wikitable" })[0]
print(table)
In [9]:
n_columns = 0
n_rows = 0
column_names = []

# Find the number of rows and columns,
# and grab the column titles if we can
for row in table.find_all('tr'):
    # Count the body rows of the table
    td_tags = row.find_all('td')
    if len(td_tags) > 0:
        n_rows += 1
        if n_columns == 0:
            # Set the number of columns for our table
            n_columns = len(td_tags)
    # Handle column names if we find them
    th_tags = row.find_all('th')
    if len(th_tags) > 0 and len(column_names) == 0:
        for th in th_tags:
            column_names.append(th.get_text())

# Safeguard: column titles must match the number of columns
if len(column_names) > 0 and len(column_names) != n_columns:
    raise Exception("Column titles do not match the number of columns")

columns = column_names if len(column_names) > 0 else range(0, n_columns)
Carg00df = pd.DataFrame(columns=columns, index=range(0, n_rows))
row_marker = 0
for row in table.find_all('tr'):
    column_marker = 0
    columns = row.find_all('td')
    for column in columns:
        Carg00df.iat[row_marker, column_marker] = column.get_text()
        column_marker += 1
    if len(columns) > 0:
        row_marker += 1
In [10]:
Carg00df
Out[10]:
In [11]:
# Pull the first run of digits as points for, and the trailing run as
# points against; raw strings avoid invalid-escape warnings, and
# expand=False returns a Series rather than a one-column DataFrame
Carg00df['pointsfor'] = Carg00df['Result'].str.extract(r'(\d+)', expand=False)
Carg00df['pointsAgainst'] = Carg00df['Result'].str.extract(r'(\d+$)', expand=False)
Rams09df['pointsfor'] = Rams09df['Opponent'].str.extract(r'(\d+)', expand=False)
# str.strip() takes a character set, not a regex, so use a regex replace
# to drop a trailing "(OT)" marker before extracting the trailing digits
Rams09df['Opponent'] = Rams09df['Opponent'].str.replace(r'\s*\(OT\)$', '', regex=True)
Rams09df['pointsAgainst'] = Rams09df['Opponent'].str.extract(r'(\d+$)', expand=False)
lions08df['pointsfor'] = lions08df['Opponent'].str.extract(r'(\d+)', expand=False)
lions08df['pointsAgainst'] = lions08df['Opponent'].str.extract(r'(\d+$)', expand=False)
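To see why one Rams row needs patching by hand in the next cell: the unanchored `r'(\d+)'` grabs the first run of digits, while the anchored `r'(\d+$)'` grabs the last, and returns NaN whenever the cell ends in something other than digits, such as an overtime marker. A sketch with hypothetical strings in the shape of the Wikipedia cells:

```python
import pandas as pd

# Hypothetical result strings; the second ends in an overtime marker
results = pd.Series(['L 21-34', 'W 17-10 (OT)'])

# First run of digits matches in both rows
points_for = results.str.extract(r'(\d+)', expand=False)
# Trailing run of digits fails to match when the string ends in '(OT)'
points_against = results.str.extract(r'(\d+$)', expand=False)

print(points_for.tolist())   # ['21', '17']
print(points_against[0])     # '34'; points_against[1] is NaN
```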
In [12]:
# Patch the overtime game the trailing-digit regex missed;
# .loc avoids pandas' chained-assignment warning
Rams09df.loc[5, 'pointsAgainst'] = 23
In [13]:
lions08df.dropna(inplace=True)
Rams09df.dropna(inplace=True)
Carg00df.dropna(inplace=True)
In [14]:
Carg00df['pointsfor'] = Carg00df['pointsfor'].astype(int)
Carg00df['pointsAgainst'] = Carg00df['pointsAgainst'].astype(int)
Rams09df['pointsfor'] = Rams09df['pointsfor'].astype(int)
Rams09df['pointsAgainst'] = Rams09df['pointsAgainst'].astype(int)
lions08df['pointsfor'] = lions08df['pointsfor'].astype(int)
lions08df['pointsAgainst'] = lions08df['pointsAgainst'].astype(int)
In [15]:
Carg00df['NetPoints'] = Carg00df['pointsfor'] - Carg00df['pointsAgainst']
Rams09df['NetPoints'] = Rams09df['pointsfor'] - Rams09df['pointsAgainst']
lions08df['NetPoints'] = lions08df['pointsfor'] - lions08df['pointsAgainst']
In [16]:
Carg00df['Average'] = Carg00df['NetPoints'].mean()
Rams09df['Average'] = Rams09df['NetPoints'].mean()
lions08df['Average'] = lions08df['NetPoints'].mean()
In [17]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib notebook
In [49]:
plt.figure()
#plt.subplot(axisbg='lavenderblush', alpha=0.9)
plt.plot(lions08df['NetPoints'], label="Lions 2008", alpha=0.2, color='b', ls='--')
plt.plot(Rams09df['NetPoints'], label="Rams 2009", alpha=0.3, color='gold', ls='--')
plt.plot(Carg00df['NetPoints'], label="Chargers 2000", alpha=0.2, color='c', ls='--')
plt.plot(lions08df['Average'], label="Average Lions 2008", color='b', alpha=0.8)
plt.plot(Rams09df['Average'], label="Average Rams 2009", color='gold', alpha=0.8)
plt.plot(Carg00df['Average'], label="Average Chargers 2000", color='c', alpha=0.8)
plt.legend()
plt.xlim((0,16))
plt.ylim((-40,5))
plt.xlabel('Week')
plt.ylabel('Net Point Differential')
plt.title('Comparison of the Winless Lions Against Single-Win Teams')
plt.grid(False)  # the 'b' keyword was removed in newer matplotlib releases
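Storing the season mean in a constant `Average` column and plotting it is one way to get a horizontal reference line; matplotlib's `axhline` draws the same line directly from the scalar, without adding a column to the DataFrame. A minimal sketch with hypothetical weekly values:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical weekly net-point values standing in for one team's season
net = pd.Series([-13, -23, -4, -9, -17])

fig, ax = plt.subplots()
ax.plot(net, ls='--', alpha=0.3, label='Net points')
# One call replaces the constant 'Average' column
ax.axhline(net.mean(), color='b', alpha=0.8, label='Season average')
ax.legend()
ax.set_xlabel('Week')
ax.set_ylabel('Net Point Differential')
```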