Lecture 01: intro, inputs, numpy, pandas

1. Inputs: CSV / Text

We will start by ingesting plain text.


In [1]:
from __future__ import print_function
import csv

In [2]:
my_reader = csv.DictReader(open('data/eu_revolving_loans.csv', 'r'))

DictReader returns a "generator" -- which means that we get only one chance to iterate over the returned row dictionaries.
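To see the one-pass behavior, here is a minimal sketch using an in-memory CSV (the rows below are made up) -- the second pass over the same reader comes back empty:

```python
import csv
import io

# A small in-memory CSV stands in for the file on disk.
buf = io.StringIO("country,rate\nDE,1.9\nFR,2.1\n")
reader = csv.DictReader(buf)

first_pass = list(reader)   # consumes the generator
second_pass = list(reader)  # nothing left to read

print(len(first_pass))
print(len(second_pass))
```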

Let's just print out line by line to see what we are reading in:


In [ ]:
for line in my_reader:
    print(line)

Since the data is tabular format, pandas is ideally suited for such data. There are convenient pandas import functions for reading in tabular data.

Pandas provides direct csv ingestion into "data frames":


In [ ]:
import pandas as pd
df = pd.read_csv('data/eu_revolving_loans.csv')
df.head()

As we briefly discussed last week, simply reading in without any configuration generates a fairly messy data frame. We should give pandas some hints as to where the header rows are and which column is the index:


In [ ]:
df = pd.read_csv('data/eu_revolving_loans.csv', header=[1,2,4], index_col=0)
df.head()
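The effect of header= and index_col= can be sketched on a small synthetic CSV (the column and row names below are invented): the listed rows become a multi-level column index, and the first column becomes the row index.

```python
import io
import pandas as pd

# Two header rows (group, metric) plus a label column.
csv_text = """group,g1,g1,g2
metric,m1,m2,m1
r1,1,2,3
r2,4,5,6
"""

# Rows 0 and 1 form a MultiIndex over the columns;
# column 0 becomes the row index.
df = pd.read_csv(io.StringIO(csv_text), header=[0, 1], index_col=0)
print(df.shape)
print(df[('g1', 'm1')].tolist())
```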

2. Inputs: Excel

Many organizations still use Excel as the common medium for communicating data and analysis. We will look quickly at how to ingest Excel data. There are many packages available to read Excel files. We will use one popular one here.


In [1]:
from __future__ import print_function
from openpyxl import load_workbook

Let's take a look at the Excel file that we want to read into Jupyter:


In [ ]:
!open 'data/climate_change_download_0.xlsx'

Here is how we can read the Excel file into the Jupyter environment.


In [2]:
wb = load_workbook(filename='data/climate_change_download_0.xlsx')

What are the "sheets" in this workbook?


In [ ]:
wb.get_sheet_names()

We will focus on the sheet 'Data':


In [ ]:
ws = wb.get_sheet_by_name('Data')

For the sheet "Data", let's print out the content cell-by-cell to view the content.


In [ ]:
for row in ws.rows:
    for cell in row:
        print(cell.value)
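The same row-by-row, cell-by-cell iteration can be sketched without the file on disk by building a small in-memory workbook (the sheet contents here are made up):

```python
from openpyxl import Workbook

# Build a tiny workbook in memory.
wb = Workbook()
ws = wb.active
ws.append(["year", "co2_ppm"])
ws.append([2000, 369.5])
ws.append([2010, 389.9])

# ws.rows yields tuples of cells; cell.value holds the data.
values = [[cell.value for cell in row] for row in ws.rows]
print(values)
```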

Pandas also provides direct Excel data ingest:


In [ ]:
import pandas as pd
df = pd.read_excel('data/climate_change_download_0.xlsx')
df.head()

Here is another example with multiple sheets:


In [5]:
df = pd.read_excel('data/GHE_DALY_Global_2000_2012.xls', sheetname='Global2012', header=[4,5])

This dataframe has a "multi-level" index:


In [ ]:
df.columns

How do we export a dataframe back to Excel?


In [9]:
df.to_excel('data/my_excel.xlsx')

In [10]:
!open 'data/my_excel.xlsx'

3. Inputs: PDF

PDF is also a common communication medium about data and analysis. Let's look at how one can read data from PDF into Python.


In [8]:
import pdftables

my_pdf = open('data/WEF_GlobalCompetitivenessReport_2014-15.pdf', 'rb')
chart_page = pdftables.get_pdf_page(my_pdf, 29)

PDF is a page-layout format: tables in it carry no explicit structure, so packages like pdftables have to reverse-engineer the structure from the positions of text on the page. Let's take a look at some structures in this file.


In [ ]:
table = pdftables.page_to_tables(chart_page)
titles = list(zip(table[0][0], table[0][1]))[:5]
titles = [''.join([title[0], title[1]]) for title in titles]
print(titles)

There is a table with structured data that we can peel out:


In [ ]:
all_rows = []
for row_data in table[0][2:]:
    all_rows.extend([row_data[:5], row_data[5:]])

print(all_rows)

4. Configurations


In [ ]:
from ConfigParser import ConfigParser
config = ConfigParser()
config.read('../cfg/sample.cfg')

In [ ]:
config.sections()
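The ../cfg/sample.cfg file isn't shown here, but the mechanics can be sketched with an in-memory config; the section and key names below are invented to mirror the ones used later. (Note that in Python 3 the module is spelled configparser.)

```python
from configparser import ConfigParser

# A sample config in INI format; values here are placeholders.
sample = """[twitter]
consumer_key = abc123

[openweathermap]
api_key = xyz789
"""

config = ConfigParser()
config.read_string(sample)

print(config.sections())
print(config.get('twitter', 'consumer_key'))
```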

5. APIs

Getting Twitter data from API

Relevant links to the exercise here:

Create an authentication handler


In [ ]:
import tweepy
auth = tweepy.OAuthHandler(config.get('twitter', 'consumer_key'), config.get('twitter', 'consumer_secret'))
auth.set_access_token(config.get('twitter','access_token'), config.get('twitter','access_token_secret'))
auth

Create an API endpoint


In [17]:
api = tweepy.API(auth)

Try REST-ful API call to Twitter


In [22]:
python_tweets = api.search('turkey')

In [ ]:
for tweet in python_tweets:
    print(tweet.text)

For the streaming API, we should run a standalone Python program: tweetering.py

Input & Output to OpenWeatherMap API

Relevant links to the exercise here:

API call:

api.openweathermap.org/data/2.5/weather?q={city name}

api.openweathermap.org/data/2.5/weather?q={city name},{country code}

Parameters:

q: city name and country code divided by comma; use ISO 3166 country codes

Examples of API calls:

api.openweathermap.org/data/2.5/weather?q=London

api.openweathermap.org/data/2.5/weather?q=London,uk
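Under the hood these calls are just URL query strings. A sketch with the standard library's urlencode shows what gets built (YOUR_KEY is a placeholder, not a real key) -- note how the comma in "London,uk" is percent-encoded:

```python
from urllib.parse import urlencode

base = "http://api.openweathermap.org/data/2.5/weather"
qs = urlencode({"q": "London,uk", "appid": "YOUR_KEY"})
url = base + "?" + qs
print(url)
```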

In [18]:
from pprint import pprint
import requests
weather_key = config.get('openweathermap', 'api_key')
res = requests.get("http://api.openweathermap.org/data/2.5/weather",
                  params={"q": "San Francisco", "appid": weather_key, "units": "metric"})

In [ ]:
pprint(res.json())

6. Python requests

"requests" is a wonderful HTTP library for Python, with the right level of abstraction to avoid lots of tedious plumbing (manually adding query strings to your URLs, or form-encoding your POST data). Keep-alive and HTTP connection pooling are 100% automatic, powered by urllib3, which is embedded within Requests.

>>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
>>> r.status_code
200
>>> r.headers['content-type']
'application/json; charset=utf8'
>>> r.encoding
'utf-8'
>>> r.text
u'{"type":"User"...'
>>> r.json()
{u'private_gists': 419, u'total_private_repos': 77, ...}

There is a lot of great documentation at the python-requests site -- we are extracting selected highlights from there for your convenience here.

Making a request

Making a request with Requests is very simple.

Begin by importing the Requests module:


In [27]:
import requests

Now, let's try to get a webpage. For this example, let's get GitHub's public timeline:


In [28]:
r = requests.get('https://api.github.com/events')

Now, we have a Response object called r. We can get all the information we need from this object.

Requests' simple API means that all forms of HTTP request are equally straightforward. For example, this is how you make an HTTP POST request:


In [29]:
r = requests.post('http://httpbin.org/post', data = {'key':'value'})

What about the other HTTP request types: PUT, DELETE, HEAD and OPTIONS? These are all just as simple:


In [30]:
r = requests.put('http://httpbin.org/put', data = {'key':'value'})
r = requests.delete('http://httpbin.org/delete')
r = requests.head('http://httpbin.org/get')
r = requests.options('http://httpbin.org/get')

Passing Parameters In URLs

You often want to send some sort of data in the URL's query string. If you were constructing the URL by hand, this data would be given as key/value pairs in the URL after a question mark, e.g. httpbin.org/get?key=val. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. As an example, if you wanted to pass key1=value1 and key2=value2 to httpbin.org/get, you would use the following code:


In [31]:
payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.get('http://httpbin.org/get', params=payload)

You can see that the URL has been correctly encoded by printing the URL:


In [ ]:
print(r.url)

Note that any dictionary key whose value is None will not be added to the URL's query string.
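requests drops the None-valued keys for you; the effect can be sketched manually with a dict comprehension plus the standard library's urlencode:

```python
from urllib.parse import urlencode

params = {"q": "London", "units": None, "appid": "KEY"}

# Keep only the keys whose value is not None, as requests does.
filtered = {k: v for k, v in params.items() if v is not None}
qs = urlencode(filtered)
print(qs)
```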

You can also pass a list of items as a value:


In [ ]:
payload = {'key1': 'value1', 'key2': ['value2', 'value3']}

r = requests.get('http://httpbin.org/get', params=payload)
print(r.url)

Response Content

We can read the content of the server's response. Consider the GitHub timeline again:


In [ ]:
import requests

r = requests.get('https://api.github.com/events')
r.text

Requests will automatically decode content from the server. Most unicode charsets are seamlessly decoded.

When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property:
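What a wrong encoding guess does to the bytes can be sketched directly: the same UTF-8 bytes decoded as ISO-8859-1 produce mojibake.

```python
# UTF-8 encodes 'é' as two bytes: 0xC3 0xA9.
raw = "café".encode("utf-8")

text_utf8 = raw.decode("utf-8")        # correct decoding
text_latin1 = raw.decode("ISO-8859-1") # each byte read as one character

print(text_utf8)
print(text_latin1)
```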


In [ ]:
r.encoding

In [37]:
r.encoding = 'ISO-8859-1'

If you change the encoding, Requests will use the new value of r.encoding whenever you call r.text. You might want to do this in any situation where you can apply special logic to work out what the encoding of the content will be. For example, HTTP and XML have the ability to specify their encoding in their body. In situations like this, you should use r.content to find the encoding, and then set r.encoding. This will let you use r.text with the correct encoding.

Requests will also use custom encodings in the event that you need them. If you have created your own encoding and registered it with the codecs module, you can simply use the codec name as the value of r.encoding and Requests will handle the decoding for you.

JSON Response Content

There's also a builtin JSON decoder, in case you're dealing with JSON data:


In [ ]:
import requests

r = requests.get('https://api.github.com/events')
r.json()

In case the JSON decoding fails, r.json raises an exception. For example, if the response gets a 204 (No Content), or if the response contains invalid JSON, attempting r.json raises ValueError: No JSON object could be decoded.

It should be noted that the success of the call to r.json does not indicate the success of the response. Some servers may return a JSON object in a failed response (e.g. error details with HTTP 500). Such JSON will be decoded and returned. To check that a request is successful, use r.raise_for_status() or check r.status_code is what you expect.


In [ ]:
r.status_code

Custom Headers

If you'd like to add HTTP headers to a request, simply pass in a dict to the headers parameter.

For example, we didn't specify our user-agent in the previous example:


In [41]:
url = 'https://api.github.com/some/endpoint'
headers = {'user-agent': 'my-app/0.0.1'}

r = requests.get(url, headers=headers)

Note: Custom headers are given less precedence than more specific sources of information. For instance:

  • Authorization headers set with headers= will be overridden if credentials are specified in .netrc, which in turn will be overridden by the auth= parameter.
  • Authorization headers will be removed if you get redirected off-host.
  • Proxy-Authorization headers will be overridden by proxy credentials provided in the URL.
  • Content-Length headers will be overridden when we can determine the length of the content.

Response Headers

We can view the server's response headers using a Python dictionary:


In [ ]:
r.headers

The dictionary is special, though: it's made just for HTTP headers. According to RFC 7230, HTTP Header names are case-insensitive.

So, we can access the headers using any capitalization we want:
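A toy sketch of the idea -- not requests' actual implementation (which lives in requests.structures.CaseInsensitiveDict) -- is a dict that lowercases keys on the way in and on lookup:

```python
class LowercaseHeaderDict(dict):
    """Toy case-insensitive dict: normalizes keys to lowercase."""

    def __setitem__(self, key, value):
        super().__setitem__(key.lower(), value)

    def __getitem__(self, key):
        return super().__getitem__(key.lower())

    def get(self, key, default=None):
        return super().get(key.lower(), default)


h = LowercaseHeaderDict()
h["Content-Type"] = "application/json"

# Any capitalization reaches the same entry.
print(h["content-type"])
print(h.get("CONTENT-TYPE"))
```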


In [ ]:
r.headers['Content-Type']

In [ ]:
r.headers.get('content-type')

Cookies

If a response contains some Cookies, you can quickly access them:


In [ ]:
url = 'http://www.cnn.com'
r = requests.get(url)
print(r.cookies.items())

To send your own cookies to the server, you can use the cookies parameter:


In [ ]:
url = 'http://httpbin.org/cookies'
cookies = dict(cookies_are='working')
r = requests.get(url, cookies=cookies)
r.text

Redirection and History

By default Requests will perform location redirection for all verbs except HEAD.

We can use the history property of the Response object to track redirection.

The Response.history list contains the Response objects that were created in order to complete the request. The list is sorted from the oldest to the most recent response.

For example, GitHub redirects all HTTP requests to HTTPS:


In [ ]:
r = requests.get('http://github.com')
r.url

In [ ]:
r.status_code

In [ ]:
r.history

If you're using GET, OPTIONS, POST, PUT, PATCH or DELETE, you can disable redirection handling with the allow_redirects parameter:


In [ ]:
r = requests.get('http://github.com', allow_redirects=False)

r.status_code

In [ ]:
r.history

If you're using HEAD, you can enable redirection as well:


In [ ]:
r = requests.head('http://github.com', allow_redirects=True)
r.url

In [ ]:
r.history

Timeouts

You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter:


In [ ]:
requests.get('http://github.com', timeout=1)

Note

timeout is not a time limit on the entire response download; rather, an exception is raised if the server has not issued a response for timeout seconds (more precisely, if no bytes have been received on the underlying socket for timeout seconds).
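The behavior can be sketched locally without hitting the network: a socket that listens but never replies lets the TCP handshake complete, yet no response bytes ever arrive, so requests raises a Timeout.

```python
import socket
import requests

# Listen on a free local port but never accept or respond.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

timed_out = False
try:
    requests.get("http://127.0.0.1:%d/" % port, timeout=0.5)
except requests.exceptions.Timeout:
    timed_out = True
finally:
    srv.close()

print(timed_out)
```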

Errors and Exceptions

In the event of a network problem (e.g. DNS failure, refused connection, etc), Requests will raise a ConnectionError exception.

Response.raise_for_status() will raise an HTTPError if the HTTP request returned an unsuccessful status code.

If a request times out, a Timeout exception is raised.

If a request exceeds the configured number of maximum redirections, a TooManyRedirects exception is raised.

All exceptions that Requests explicitly raises inherit from requests.exceptions.RequestException.
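Because of that shared base class, a single except clause can catch any of them; the hierarchy can be checked directly:

```python
from requests.exceptions import (
    RequestException,
    ConnectionError,
    HTTPError,
    Timeout,
    TooManyRedirects,
)

# Every Requests exception inherits from RequestException.
for exc in (ConnectionError, HTTPError, Timeout, TooManyRedirects):
    print(exc.__name__, issubclass(exc, RequestException))
```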
