This notebook accompanies the agilescientific.com blog post of the same name.
CSV files are the de facto standard way to store data on the web. They are human-readable, easy to parse with multiple tools, and they compress easily. So you need to know how to read and write them in Python.
Nine times out of ten, the way to read and write CSV files with Python is with pandas. We'll do that first. But that's not always an option (maybe you don't want the dependency in your code), so we'll do it lots of ways:

- pandas
- csv, with both csv.reader and csv.DictReader
- NumPy

We'll also use pandas to read a couple of CSV files from the web (as opposed to from flat files on our computer).

pandas

Using pd.read_csv.
In [1]:
import pandas as pd
In [3]:
fname = "../data/periods.csv"
In [4]:
df = pd.read_csv(fname)
In [5]:
df
Out[5]:
We can get the start of the Permian like this:
In [6]:
df[df.name=="Permian"].start
Out[6]:
Let's fix the start of the Cretaceous:
In [7]:
df.loc[df.name=='Cretaceous', 'start'] = 145.0
df.loc[df.name=='Cretaceous', 'start']
Out[7]:
After you have changed or added to a DataFrame, pandas also makes it very easy to write a CSV file containing your data.
In [8]:
df.to_csv("../data/pdout.csv")
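Note that to_csv writes the row index as an unnamed first column by default; pass index=False to omit it. A minimal sketch (the DataFrame is made up, and an in-memory buffer stands in for a real file path):

```python
import io

import pandas as pd

# A made-up one-row DataFrame, just to show the effect of index=False.
df = pd.DataFrame({"name": ["Permian"], "start": [298.9]})

buf = io.StringIO()  # stands in for a path like "../data/pdout.csv"
df.to_csv(buf, index=False)  # index=False drops the row-index column

print(buf.getvalue())
```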
In [9]:
import csv
In [10]:
with open(fname) as f:
    reader = csv.reader(f)
    data = [row for row in reader]
In [11]:
data
Out[11]:
In [12]:
[d[2] for d in data if d[0]=="Permian"]
Out[12]:
Note that we needed to know the positions of the items in the rows, which we could only get by inspection. We could skip that header row if we wanted to, but there's a better way: use the header as the keys in a dictionary...
In [13]:
with open(fname) as f:
    reader = csv.DictReader(f)
    data = [row for row in reader]
In [14]:
data
Out[14]:
In [15]:
[d['start'] for d in data if d['name']=="Permian"]
Out[15]:
There is a corresponding DictWriter class for writing CSVs.
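For completeness, here's a minimal sketch of csv.DictWriter. The rows are made up for illustration, and an io.StringIO stands in for a real file:

```python
import csv
import io

# Made-up rows of dicts, like those produced by csv.DictReader.
rows = [
    {"name": "Permian", "start": "298.9", "end": "251.902"},
    {"name": "Triassic", "start": "251.902", "end": "201.3"},
]

buf = io.StringIO()  # use open("out.csv", "w", newline="") for a real file
writer = csv.DictWriter(buf, fieldnames=["name", "start", "end"])
writer.writeheader()    # writes the header row from fieldnames
writer.writerows(rows)  # one CSV row per dict

print(buf.getvalue())
```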
Note that pandas has lots of file readers, including ones for Excel, JSON, HTML tables, and SQL databases. It can even read the clipboard!
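As a quick illustration of one of those readers (the JSON payload below is invented), pd.read_json parses JSON from a file, URL, or file-like object:

```python
import io

import pandas as pd

# An invented JSON payload, wrapped in StringIO so that
# read_json treats it as a file-like object.
payload = '[{"name": "Permian", "start": 298.9}, {"name": "Triassic", "start": 251.902}]'
df = pd.read_json(io.StringIO(payload))

print(df)
```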
In the X Lines of Python: Machine Learning notebook, I read the online file into a buffer, but it turns out you don't need to do this — you can just give pd.read_csv() a URL! For example, this file is hosted by GitHub. It's publicly readable, so we don't need to authenticate.
In [24]:
import requests
import io
df = pd.read_csv('https://raw.githubusercontent.com/agile-geoscience/xlines/master/data/periods.csv')
df.head()
Out[24]:
pandas is perfect for this CSV because it's really a table, containing a mixture of data types (strings and floats).
Nonetheless, we can read it as an array... I'm not really into 'named arrays', so I'll just read the two numeric columns.
We'll use np.genfromtxt.
In [17]:
import numpy as np
In [18]:
x = np.genfromtxt(fname, delimiter=',', skip_header=1, usecols=[2,3])
x
Out[18]:
We can write a CSV like so:
In [19]:
np.savetxt("../data/npout.csv", x, delimiter=",", header="start,end")
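One gotcha: np.savetxt prepends its comment character ("# ") to the header line by default, so the file above actually starts with "# start,end". Passing comments="" gives a clean CSV header. A sketch, writing to an in-memory buffer instead of a file:

```python
import io

import numpy as np

# A made-up 2x2 array of start/end ages.
x = np.array([[145.0, 66.0],
              [201.3, 145.0]])

buf = io.StringIO()  # stands in for a path like "../data/npout.csv"
np.savetxt(buf, x, delimiter=",", header="start,end", comments="")

print(buf.getvalue())
```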
It used to be easy to anonymously read a public file directly from Google Docs, but now you need an API key. It's not too hard to set up, but you'll need to read some docs.
When you have an API key, put it here...
In [20]:
key = "PUT YOUR KEY HERE"
In [22]:
import json
In [23]:
url = "https://sheets.googleapis.com/v4/spreadsheets/{id}/values/{sheet}"
meta = {"id": "1YlnEGT8uHpRllk7rjAgFFl8V6B5-kl02DBie11PjG9Q",
        "sheet": "Sheet1"}
url = url.format(**meta)
params = {"key": key}
r = requests.get(url, params=params)
j = json.loads(r.text)['values']
df = pd.DataFrame(j[1:], columns=j[0])
df.head()
Out[23]: