In the last lecture, we looked at some ways of interacting with the filesystem through Python and how to read data off files stored on the hard drive. We looked at raw text files; however, there are numerous structured formats that these files can take, and we'll explore some of those here.
We've discussed text formats: each line of text in a file can be treated as a string in a list of strings. What else might we encounter in our data science travels?
Easily the most common text file format is the CSV, or comma-separated values format. This is pretty much what it sounds like: if you have (semi-) structured data, you can delineate spaces between data using commas (or, to generalize, other characters like tabs).
As an example, we could represent a matrix very easily using the CSV format. The file storing a 3x3 matrix would look something like this:
1,2,3
4,5,6
7,8,9
Each row is on one line by itself, and the columns are separated by commas.
How can we read a CSV file? One way, potentially, is just do it yourself:
In [ ]:
# File "csv_file.txt" contains the following:
# 1,2,3,4
# 5,6,7,8
# 9,10,11,12
In [4]:
matrix = []
with open("csv_file.txt", "r") as f:
    full_file = f.read()

# Split into lines.
lines = full_file.strip().split("\n")
for line in lines:
    # Split on commas.
    elements = line.strip().split(",")
    matrix.append([])
    # Convert to integers and store in the list.
    for e in elements:
        matrix[-1].append(int(e))
print(matrix)
If, however, we'd prefer to use something a little less strip()-y and split()-y, Python also has a core csv module built in:
In [14]:
import csv

# The newline = "" argument keeps the csv module from inserting blank
# rows on some platforms (this is the usage the csv docs recommend).
with open("eggs.csv", "w", newline = "") as csv_file:
    file_writer = csv.writer(csv_file)
    row1 = ["Sunny-side up", "Over easy", "Scrambled"]
    row2 = ["Spam", "Spam", "More spam"]
    file_writer.writerow(row1)
    file_writer.writerow(row2)

with open("eggs.csv", "r") as csv_file:
    print(csv_file.read())
Notice that you first create a file reference, just like before. The one added step, though, is passing that reference to the csv.writer() function.

Once you've created the file_writer object, you can call its writerow() function, pass in a list, and the list is automatically written to the file in CSV format!
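If you have a whole list of rows to write, you don't even need to call writerow() once per row: the writer object also has a writerows() method that writes them all in one call. Here's a minimal sketch (the filename eggs2.csv is just for illustration):

In [ ]:
import csv

rows = [["Sunny-side up", "Over easy", "Scrambled"],
        ["Spam", "Spam", "More spam"]]
# writerows() writes every row in the list, one line each.
with open("eggs2.csv", "w", newline = "") as csv_file:
    csv.writer(csv_file).writerows(rows)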
The CSV readers let you do the opposite: read a line of text from a CSV file directly into a list.
In [15]:
with open("eggs.csv", "r") as csv_file:
    file_reader = csv.reader(csv_file)
    for csv_row in file_reader:
        print(csv_row)
You can use a for loop to iterate over the rows in the CSV file. In turn, each row is a list, where each element of the list was separated by a comma in the file.
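The csv module isn't limited to commas or to plain lists, either. csv.reader() accepts a delimiter argument for things like tab-separated files, and csv.DictReader treats the first row as a header and yields each remaining row as a dictionary. A quick sketch using the eggs.csv file from above (the tab-separated filename is hypothetical):

In [ ]:
import csv

# DictReader uses the first row as field names and yields one
# dictionary per remaining row.
with open("eggs.csv", "r") as csv_file:
    for record in csv.DictReader(csv_file):
        print(record)

# For tab-separated data, just change the delimiter:
# with open("data.tsv", "r") as f:
#     for row in csv.reader(f, delimiter = "\t"):
#         print(row)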
"JSON", short for "JavaScript Object Notation", has emerged as more or less the de facto standard format for interacting with online services. Like CSV, it's a text-based format, but is much more flexible than CSV.
Here's an example: an object in JSON format that represents a person.
In [13]:
person = """
{"name": "Wes",
"places_lived": ["United States", "Spain", "Germany"],
"pet": null,
"siblings": [{"name": "Scott", "age": 25, "pet": "Zuko"},
{"name": "Katie", "age": 33, "pet": "Cisco"}]
}
"""
It looks kind of like a Python dictionary, doesn't it? You have key-value pairs, and they can accommodate almost any data type. In fact, when JSON objects are converted into native Python data structures, they are represented using dictionaries.
For reading and writing JSON objects, we can use the built-in json Python module.
In [14]:
import json
(Aside: with CSV files, it was fairly straightforward to eschew the built-in csv module and do it yourself. With JSON, it is much harder; in fact, there really isn't a case where it's advisable to roll your own over using the built-in json module.)
There are two functions of interest: dumps() and loads(). loads() takes a JSON string and converts it to a native Python object, while dumps() does the opposite.
First, we'll take our JSON string and convert it into a Python dictionary:
In [15]:
python_dict = json.loads(person)
print(python_dict)
And if you want to take a Python dictionary and convert it into a JSON string--perhaps you're about to save it to a file, or send it over the network to someone else--we can do that.
In [16]:
json_string = json.dumps(python_dict)
print(json_string)
At first glance, these two print-outs may look the same, but if you look closely you'll see some differences. Plus, if you tried to index json_string["name"] you'd get a TypeError, since json_string is just one long string. python_dict["name"], on the other hand, should nicely return "Wes".
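One quick way to convince yourself of the difference is to check the types. And if you're headed for a file rather than a string, the json module also provides dump() and load() (no "s"), which write to and read from file objects directly. A short sketch (person.json is a made-up filename):

In [ ]:
# The JSON version is just text; the parsed version is a real dict.
print(type(json_string))   # <class 'str'>
print(type(python_dict))   # <class 'dict'>

# dump()/load() are the file-based counterparts of dumps()/loads().
with open("person.json", "w") as f:
    json.dump(python_dict, f)
with open("person.json", "r") as f:
    print(json.load(f)["name"])   # Wes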
AVOID AT ALL COSTS.
...but if you have to interact with XML data (e.g., you're manually parsing a web page!), Python has a built-in xml library.
XML is about as general as it gets when it comes to representing data using structured text; you can represent pretty much anything. HTML, the language of web pages, looks very similar, and its stricter XHTML variant is an example of XML in practice.
<?xml version="1.0" standalone="yes"?>
<conversation>
    <greeting>Hello, world!</greeting>
    <response>Stop the planet, I want to get off!</response>
</conversation>
This is about the simplest excerpt of XML in existence. The basic idea is you have tags (delineated by < and > symbols) that identify where certain fields begin and end.

Each field has an opening tag, with the name of the field in angled brackets: <field>. The closing tag is exactly the same, except with a forward slash in front of the tag name to indicate closing: </field>.

These tags can also have their own custom attributes that slightly tweak their behavior (e.g. the standalone="yes" attribute in the opening <?xml tag).
You've probably noticed there is a very strong hierarchy of terms in XML. This is not unlike JSON in many ways, and for this reason the following piece of advice is the same: don't try to roll your own XML parser. You'll pull out your hair.
The XML file we'll look at comes directly from the Python documentation for its XML parser:
<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>
In [5]:
import xml.etree.ElementTree as ET # See, even the import statement is stupid complicated.
tree = ET.parse('xml_file.txt')
root = tree.getroot()
print(root.tag) # The root node is "data", so that's what we should see here.
With the root node, we have access to all the "child" data beneath it, such as the various country names:
In [12]:
for child in root:
    print("Tag: \"{}\" :: Name: \"{}\"".format(child.tag, child.attrib["name"]))
What happens when we're not dealing with text? After all, images and videos are most certainly not encoded using text. Furthermore, if memory is an issue, converting text into binary formats can help save space.
There are two primary options for reading and writing binary files: Python's built-in pickle module, and NumPy's own binary format for arrays.

pickle, or "pickling", is native in Python and very flexible. Pickle has some similarities with JSON. In particular, it uses the same method names, dumps() and loads(), for converting between native Python objects and the raw data format. There are several differences, however.
Here's an example of saving (or "serializing") a dictionary using pickle instead of JSON:
In [17]:
import pickle
# We'll use the `python_dict` object from before.
binary_object = pickle.dumps(python_dict)
print(binary_object)
You can kinda see some English in there--mainly, the string constants. But everything else has been encoded in binary. It's much more space-efficient, but complete gibberish until you convert it back into a text format (e.g. JSON) or native Python object (e.g. dictionary).
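Converting back is just the mirror image: loads() turns the bytes back into the original object. There are also file-based versions, dump() and load(), which need the file opened in binary mode. A quick sketch (my_dict.pkl is just an example filename):

In [ ]:
# Round-trip: loads() restores an object equal to the original.
restored = pickle.loads(binary_object)
print(restored == python_dict)   # True

# dump()/load() are the file-based versions; note the "b" in the modes.
with open("my_dict.pkl", "wb") as f:
    pickle.dump(python_dict, f)
with open("my_dict.pkl", "rb") as f:
    print(pickle.load(f))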
If, on the other hand, you're using NumPy arrays, then you can use its own built-in binary format for saving and loading your arrays.
In [21]:
import numpy as np
# Generate some data and save it.
some_data = np.random.randint(10, size = (3, 3))
print(some_data)
np.save("my_data.npy", some_data)
Now we can load it back:
In [22]:
my_data = np.load("my_data.npy")
print(my_data)
This is by far the easiest format to work with when you're dealing exclusively with NumPy arrays; don't bother with CSV or pickling. You don't even need to set up file descriptors with the NumPy interface.
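If you have several related arrays, NumPy can also bundle them into a single archive with np.savez(); each array comes back out under the keyword name you saved it with. A short sketch (bundle.npz is a made-up filename):

In [ ]:
# Save two arrays into one .npz archive, naming each with a keyword.
more_data = np.random.randint(10, size = (2, 2))
np.savez("bundle.npz", first = some_data, second = more_data)

# The loaded object acts like a dictionary of arrays.
archive = np.load("bundle.npz")
print(archive["first"])
print(archive["second"])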
That said, there are limitations to NumPy serialization: namely, it can only serialize things that can be stored in NumPy arrays. This does not include dictionaries!

pickle, on the other hand, can serialize dictionaries (and nearly any other Python object), but like NumPy serialization it is not terribly cross-platform capable.
So basically, some core rules of thumb on what binary format to use: if your data consist exclusively of NumPy arrays, use NumPy's built-in format; for anything else (dictionaries, mixed types, custom objects), use pickle.

Some questions to discuss and consider:
1: Dictionaries can be very complex; for a good example, just have a look at how big a dictionary representation of a single Tweet is (https://dev.twitter.com/overview/api/tweets): there's "created_at", which is a string indicating the time the tweet was created; "contributors", which is a dictionary unto itself identifying users participating in a thread; "entities", a dictionary of lists that includes hashtags and URLs in the tweet; and "user", which is another gargantuan dictionary containing all the information about the author of the tweet. What would be a good format to store these tweets in on the hard drive? What if we were sending these tweets somewhere, such as a smartphone app; would we use a different format? Explain.
2: You can actually read raw bytes of a binary file using the standard Python open() function, provided you supply the special "b" flag to indicate a binary format. Can you imagine any circumstances under which you'd read a binary file this way?
3: Is there any other format in which we could store the example XML data from this lecture such that we could avoid using XML entirely?
4: NumPy itself has limited CSV-reading capabilities in numpy.loadtxt. Given its limitations in binary serialization as discussed in this lecture, do you imagine there are limitations on what kind of data it can read from CSV files?
5: What kind of format (binary or text) is a .png image? Could it be stored as the other format? How?