In [1]:
# Some styling stuff... ignore this for now!
from IPython.display import HTML
HTML("""<style>
  .rendered_html {font-size: 140%;}
  .rendered_html h1, h2 {text-align:center;}
</style>""")


Out[1]:

Software Engineering for Data Scientists

Manipulating Data with Python

CSE 599 B1

Today's Objectives

1. Opening & Navigating the IPython Notebook

2. Simple Math in the IPython Notebook

3. Loading data with pandas

4. Cleaning and Manipulating data with pandas

5. Visualizing data with pandas

1. Opening and Navigating the IPython Notebook

We will start today with the interactive environment that we will be using often throughout the course: the IPython/Jupyter Notebook.

We will walk through the following steps together:

  1. Download Miniconda (be sure to get the Python 3.5 version) and install it on your system (hopefully you have done this before coming to class)

  2. Use the conda command-line tool to update your package listing and install the IPython notebook:

    Update conda's listing of packages for your system:

    $ conda update conda

    Install the IPython notebook and all of its requirements:

    $ conda install ipython-notebook
  3. Navigate to the directory containing the course material. For example:

    $ cd ~/courses/CSE599/

    You should see a number of files in the directory, including these:

    $ ls
    ...
    Breakout-Simple-Math.ipynb
    CSE599_Lecture_2.ipynb
    ...
  4. Type ipython notebook in the terminal to start the notebook

    $ ipython notebook

    If everything has worked correctly, it should automatically open the notebook dashboard in your default browser.

  5. Click on CSE599_Lecture_2.ipynb to open the notebook containing the content for this lecture.

With that, you're set up to use the IPython notebook!

2. Simple Math in the IPython Notebook

Now that we have the IPython notebook up and running, we're going to do a short breakout exploring some of the mathematical functionality that Python offers.

Please open Breakout-Simple-Math.ipynb, find a partner, and make your way through that notebook, typing and executing code along the way.

3. Loading data with pandas

With this simple Python computation experience under our belt, we can now move to doing some more interesting analysis.

Python's Data Science Ecosystem

In addition to Python's built-in modules like the math module we explored above, there are also many often-used third-party modules that are core tools for doing data science with Python. Some of the most important ones are:

numpy: Numerical Python

Numpy is short for "Numerical Python", and contains tools for efficient manipulation of arrays of data. If you have used other computational tools like IDL or MATLAB, Numpy should feel very familiar.

scipy: Scientific Python

Scipy is short for "Scientific Python", and contains a wide range of functionality for accomplishing common scientific tasks, such as optimization/minimization, numerical integration, interpolation, and much more. We will not look closely at Scipy today, but we will use its functionality later in the course.

pandas: Labeled Data Manipulation in Python

Pandas is short for "Panel Data", and contains tools for doing more advanced manipulation of labeled data in Python, in particular with a columnar data structure called a Data Frame. If you've used the R statistical language (and in particular the so-called "Hadley Stack"), much of the functionality in Pandas should feel very familiar.

matplotlib: Visualization in Python

Matplotlib started out as a MATLAB plotting clone in Python, and has grown from there in the 15 years since its creation. It is the most popular data visualization tool currently in the Python data world (though other recent packages are starting to encroach on its monopoly).

Installing Pandas & friends

Because the above packages are not included in Python itself, you need to install them separately. While it is possible to install them from source (compiling the C and/or Fortran code that does the heavy lifting under the hood), it is much easier to use a package manager like conda. All it takes is to run

$ conda install numpy scipy pandas matplotlib

and (so long as your conda setup is working) the packages will be downloaded and installed on your system.

Loading Data with Pandas


In [ ]:
import pandas

Because we'll use it so much, we often import it under a shortened name using the import ... as ... pattern:


In [ ]:
import pandas as pd

Now we can use the read_csv command to read the comma-separated-value data:


In [ ]:


In [ ]:
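
A minimal sketch of what this cell might contain (the filename below is a placeholder; use whichever CSV file came with the course materials, and the variable name data is simply a choice we'll stick with):

import pandas as pd

# Read the comma-separated-value file into a DataFrame
# ('trip_data.csv' is a placeholder for the actual course dataset)
data = pd.read_csv('trip_data.csv')

# Putting the variable on the last line of the cell displays the table
data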

Viewing Pandas Dataframes

The head() and tail() methods show us the first and last few rows of the data:


In [ ]:


In [ ]:
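
For example, with the data DataFrame loaded above (only the last expression in a cell is displayed, so in practice you would run these in separate cells):

data.head()    # the first 5 rows (pass a number to show more or fewer)
data.tail(3)   # the last 3 rows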

The shape attribute shows us the number of rows and columns:


In [ ]:

The columns attribute gives us the column names:


In [ ]:

The index attribute gives us the index (the row labels):


In [ ]:

The dtypes attribute gives the data types of each column:


In [ ]:
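
A quick sketch of all four attributes, again using the data DataFrame from above (run them one per cell to see each result):

data.shape     # (number of rows, number of columns)
data.columns   # the column labels
data.index     # the row labels
data.dtypes    # the data type stored in each column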

4. Manipulating data with pandas

Here we'll cover some key features of manipulating data with pandas

Access columns by name using square-bracket indexing:


In [ ]:
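
For example, assuming the dataset has a tripduration column (the column names here are assumptions about the trip data; check data.columns for the real ones):

data['tripduration']   # select one column; the result is a pandas Series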

Mathematical operations on columns happen element-wise:


In [ ]:
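
For instance, dividing a column of seconds by 60 converts every value to minutes in one step (assuming tripduration is measured in seconds):

data['tripduration'] / 60   # element-wise division; returns a new Series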

Columns can be created (or overwritten) with the assignment operator. Let's create a tripminutes column with the number of minutes for each trip:


In [ ]:
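
A sketch of that assignment, under the same assumption that tripduration holds seconds:

# Assigning to a column name that doesn't exist yet creates the column
data['tripminutes'] = data['tripduration'] / 60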

Working with Times

One trick to know when working with columns of times is that the Pandas DatetimeIndex provides a nice interface to them:


In [ ]:
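
For example, assuming the start time of each trip is stored as strings in a starttime column:

# Parse the string timestamps into a DatetimeIndex
times = pd.DatetimeIndex(data['starttime'])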

With it, we can extract the hour of the day, the day of the week, the month, and a wide range of other views of the time:


In [ ]:


In [ ]:


In [ ]:


In [ ]:
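
Using the times object built above, each of these views is just an attribute:

times.hour        # hour of the day (0-23)
times.dayofweek   # day of the week (0 = Monday)
times.month       # month of the year (1-12)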

Simple Grouping of Data

The real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at value counts and the basics of group-by operations.

Value Counts

Pandas includes an array of useful functionality for manipulating and analyzing tabular data. We'll take a look at two of these here.

The pandas.value_counts function returns the count of each unique value within a column.

We can use it, for example, to break down rides by gender:


In [ ]:
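
A sketch of that, assuming the dataset has a gender column:

pd.value_counts(data['gender'])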

Or to break down rides by age:


In [ ]:
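
One possible sketch, assuming the dataset stores each rider's birthyear rather than their age, and taking 2015 as the year the data was collected (both are assumptions):

pd.value_counts(2015 - data['birthyear'])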

What else might we break down rides by?


In [ ]:


In [ ]:


In [ ]:

Group-by Operation

One of the killer features of the Pandas dataframe is the ability to do group-by operations. You can visualize the group-by like this (image borrowed from the Python Data Science Handbook):


In [ ]:
from IPython.display import Image
Image('split_apply_combine.png')

So, for example, we can use this to find the average length of a ride as a function of time of day:


In [ ]:
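
A sketch of that, reusing the times.hour values from above as the grouping key:

# Group trips by the hour they started, then average the trip length within each group
data.groupby(times.hour)['tripminutes'].mean()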

The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.):

<data object>.groupby(<grouping values>).<aggregate>()

You can even group by multiple values: for example, we can look at the trip duration by time of day and by gender:


In [ ]:
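
A sketch, again using the hour values and the gender column assumed earlier:

# Passing a list groups by both keys at once
data.groupby([times.hour, 'gender'])['tripminutes'].mean()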

The unstack() operation can help make sense of this type of multiply-grouped data:


In [ ]:
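
For example, unstacking the doubly-grouped result from above turns the inner level (gender) into columns:

grouped = data.groupby([times.hour, 'gender'])['tripminutes'].mean()
grouped.unstack()   # hours stay as rows, genders become columns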

5. Visualizing data with pandas

Of course, looking at tables of data is not very intuitive. Fortunately Pandas has many useful plotting functions built-in, all of which make use of the matplotlib library to generate plots.

Whenever you do plotting in the IPython notebook, you will want to first run this magic command, which configures the notebook to work well with plots:


In [ ]:
%matplotlib inline

Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data:


In [ ]:
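
For example, plotting the hourly averages computed earlier gives a quick line plot:

data.groupby(times.hour)['tripminutes'].mean().plot()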

Adjusting the Plot Style

The default formatting is not very nice; I often make use of the Seaborn library for better plotting defaults.

First you'll have to

$ conda install seaborn

and then you can do this:


In [ ]:
import seaborn
seaborn.set()

And now re-run the plot from above:


In [ ]:

Other plot types

Pandas supports a range of other plotting types; you can find these by using the autocomplete on the plot method:


In [ ]:

For example, we can create a histogram of trip durations:


In [ ]:
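
A sketch of that histogram, using the tripminutes column created earlier (the bin count is just a choice):

data['tripminutes'].plot.hist(bins=50)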

If you'd like to adjust the x and y limits of the plot, you can use the set_xlim() and set_ylim() methods of the resulting object:


In [ ]:
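
The plot methods return a matplotlib Axes object, so a sketch of adjusting the limits looks like this:

ax = data['tripminutes'].plot.hist(bins=50)
ax.set_xlim(0, 120)   # show only trips up to two hours; set_ylim() works the same way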

Breakout: Exploring the Data

  1. Make a plot of the total number of rides as a function of month of the year (You'll need to extract the month, use a groupby, and find the appropriate aggregation to count the number in each group).

In [ ]:

  2. Split this plot by gender. Do you see any seasonal ridership patterns by gender?

In [ ]:

  3. Split this plot by user type. Do you see any seasonal ridership patterns by user type?

In [ ]:

  4. Repeat the above three steps, counting the number of rides by time of day rather than by month.

In [ ]:

  5. Are there any other interesting insights you can discover in the data using these tools?

In [ ]:


In [ ]:

Looking Forward to Homework

In the homework this week, you will have a chance to apply some of these patterns to a brand new (but closely related) dataset.