So far, all the data we've worked with have either been manually instantiated as NumPy arrays or lists of strings, or randomly generated. Here we'll finally get to go over reading from and writing to the filesystem. By the end of this lecture, you should be able to read and write plain text, CSV, JSON, and binary files, and handle missing-file errors gracefully.
Text files are probably the most common and pervasive format of data. They can contain almost anything: weather data, stock market activity, literary works, and raw web data.
Text files are also convenient for your own work: once some kind of analysis has finished, it's nice to dump the results into a file you can inspect later.
In [1]:
file_object = open("alice.txt", "r")
contents = file_object.read()
print(contents[:71])
file_object.close()
Yep, I went there.
Let's walk through the code, line by line. First, we have a call to the function open() that accepts two arguments:
In [2]:
file_object = open("alice.txt", "r")
- The first argument is the filename: "alice.txt". Unless the path starts with a "/", Python will interpret it as relative to wherever the Python script you're running lives.
- The second argument is the mode: "r" means we're opening the file for Reading.

These two arguments are passed to the function open(), which then returns a file descriptor. You can think of this kind of like the reference / pointer discussion we had in our prior functions lecture: file_object is a reference to the file.
The next line is where the magic happens:
In [3]:
contents = file_object.read()
In this line, we're calling the method read() on the file reference we got in the previous step. This method goes into the file, pulls out everything in it, and sticks it all in the variable contents. One big string!
In [4]:
print(contents[:71])
...of which I then print the first 71 characters, which contain the name of the book and the author. Feel free to print the entire string contents; it'll take a few seconds, as you're printing the whole book!
Finally, the last and possibly most important line:
In [5]:
file_object.close()
This statement explicitly closes the file reference, effectively shutting the valve to the file.
Do not underestimate the value of this statement. Weird errors can crop up when you forget to close file descriptors. It can be difficult to remember to do this, though; in languages where you have to manually allocate and release memory, the habit comes more naturally. Since Python handles all that for us, explicitly shutting off the things we've turned on isn't second nature.
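To see the "valve" metaphor in action: every file object carries a closed attribute that tracks its state. Here's a minimal sketch; the filename demo.txt is just an illustrative scratch file we create first.

```python
# Create a small scratch file so the example is self-contained.
with open("demo.txt", "w") as f:
    f.write("hello")

# Open it the manual way and watch the `closed` attribute flip.
file_object = open("demo.txt", "r")
print(file_object.closed)   # False: the valve is open
file_object.close()
print(file_object.closed)   # True: the valve is shut
```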
Fortunately, there's an alternative we can use!
In [6]:
with open("alice.txt", "r") as file_object:
    contents = file_object.read()
    print(contents[:71])
This code works identically to the code before it. The difference is that, by using a with block, Python automatically closes the file descriptor at the end of the block. No need to remember to do it yourself! Hooray!
Let's say, instead of Alice in Wonderland, we had some behemoth of a piece of literature: something along the lines of War and Peace or even an entire encyclopedia. Essentially, not something we want to read into Python all at once. Fortunately, we have an alternative:
In [7]:
with open("alice.txt", "r") as file_object:
    num_lines = 0
    for line_of_text in file_object:
        print(line_of_text)
        num_lines += 1
        if num_lines == 5:
            break
We can use a for loop just as we're used to doing with lists. At each iteration, Python hands you exactly one line of text from the file, to handle however you'd like.
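Since the loop touches one line at a time, you can also compute things about a huge file without ever holding the whole thing in memory. A quick sketch, using a small illustrative scratch file rather than the full book:

```python
# Build a tiny three-line file for the demonstration.
with open("demo.txt", "w") as f:
    f.write("one\ntwo\nthree\n")

# Count lines one at a time; the whole file is never in memory at once.
with open("demo.txt", "r") as f:
    num_lines = sum(1 for _ in f)

print(num_lines)   # 3
```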
Of course, if you still want to read in the entire file at once, but really like the idea of splitting up the file line by line, there's a function for that, too:
In [8]:
with open("alice.txt", "r") as file_object:
    lines_of_text = file_object.readlines()
    print(lines_of_text[0])
By using readlines() instead of plain old read(), we'll get back a list of strings, where each element of the list is a single line in the text file. In the code snippet above, I've printed the first line of text from the file.
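One detail worth knowing: each string readlines() hands back still ends with its "\n" character. If you'd rather get the lines without trailing newlines, one option is the string method splitlines(). A sketch, using a small scratch file:

```python
# Make a small two-line file for the demo.
with open("demo.txt", "w") as f:
    f.write("first line\nsecond line\n")

# read() the whole file, then split it into newline-free lines ourselves.
with open("demo.txt", "r") as f:
    lines = f.read().splitlines()

print(lines)   # ['first line', 'second line']
```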
In [9]:
data_to_save = "This is important data. Definitely worth saving."
with open("outfile.txt", "w") as file_object:
    file_object.write(data_to_save)
You'll notice two important changes from before:
- We changed the "r" argument in the open() function to "w". You guessed it: we've gone from Reading to Writing.
- We call write() on the file descriptor, passing in the data we want written to the file (in this case, data_to_save).

If you try this using a new notebook on JupyterHub (or on your local machine), you should see a new text file named "outfile.txt" appear in the same directory as your script. Give it a shot!
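Another detail, easy to verify for yourself: write() does not add a newline for you, so if you want your output on separate lines, include the "\n" explicitly. A sketch, with an illustrative scratch filename:

```python
# Two write() calls with no "\n": both strings land on the same line.
with open("notes.txt", "w") as f:
    f.write("line one")
    f.write("line two")

with open("notes.txt", "r") as f:
    print(f.read())   # line oneline two
```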
Some notes about writing to a file:
- If the file doesn't exist yet, Python creates it for you.
- If the file does exist, opening it in "w" mode erases whatever was already there. Your previous contents are gone.
That second point seems a bit harsh, doesn't it? Luckily, there is recourse.
If you find yourself in the situation of writing to a file multiple times, and wanting to keep what you wrote to the file previously, then you're in the market for appending to a file.
This works exactly the same as writing to a file, with one small wrinkle:
In [10]:
data_to_save = "This is ALSO important data. BOTH DATA ARE IMPORTANT."
with open("outfile.txt", "a") as file_object:
    file_object.write(data_to_save)
The only change was switching the "w" in the open() call to "a" for, you guessed it, Append. If you look in outfile.txt, you should see both lines of text we've written.
Some notes on appending to files:
- If the file doesn't already exist, opening it in "a" mode with open() is functionally identical to using "w".
- Within a single open file descriptor, each call to write() will append its text to the previous text. It's only when you close a descriptor, but then want to open up another one to the same file, that you'd need to switch to append mode.

Let's put together what we've seen by writing to a file, appending more to it, and then reading what we wrote.
In [11]:
data_to_save = "This is important data. Definitely worth saving.\n"
with open("outfile.txt", "w") as file_object:
    file_object.write(data_to_save)
In [12]:
data_to_save = "This is ALSO important data. BOTH DATA ARE IMPORTANT."
with open("outfile.txt", "a") as file_object:
    file_object.write(data_to_save)
In [13]:
with open("outfile.txt", "r") as file_object:
    contents = file_object.readlines()

print("LINE 1: {}".format(contents[0]))
print("LINE 2: {}".format(contents[1]))
We've discussed text formats: each line of text in a file can be treated as a string in a list of strings. What else might we encounter in our data science travels?
As an example, we could represent a matrix very easily using the CSV format. The file storing a 3x3 matrix would look something like this:
1,2,3
4,5,6
7,8,9
Each row is on one line by itself, and the columns are separated by commas.
How do we read CSV files? Python has a csv module built into its standard library:
In [14]:
import csv
with open("eggs.csv", "w") as csv_file:
    file_writer = csv.writer(csv_file)
    row1 = ["Sunny-side up", "Over easy", "Scrambled"]
    row2 = ["Spam", "Spam", "More spam"]
    file_writer.writerow(row1)
    file_writer.writerow(row2)

with open("eggs.csv", "r") as csv_file:
    print(csv_file.read())
Notice that you first create a file reference, just like before. The one added step is passing that reference to the csv.writer() function. Once you've created the file_writer object, you can call its writerow() method with a list, and the list is automatically written to the file in CSV format!
The CSV readers let you do the opposite: read a line of text from a CSV file directly into a list.
In [15]:
with open("eggs.csv", "r") as csv_file:
    file_reader = csv.reader(csv_file)
    for csv_row in file_reader:
        print(csv_row)
You can use a for loop to iterate over the rows of the CSV file. Each row comes back as a list, whose elements are the comma-separated values from that line.
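One caveat worth knowing: csv.reader hands every value back as a string, even if what you wrote looked numeric, so you have to convert types yourself. A sketch (the filename matrix.csv is illustrative; newline="" in open() is what the csv module's documentation recommends to avoid stray blank rows on some platforms):

```python
import csv

# Write the 3x3 matrix from earlier as CSV.
with open("matrix.csv", "w", newline="") as f:
    csv.writer(f).writerows([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Read it back: every cell is a *string* until we convert it.
with open("matrix.csv", "r", newline="") as f:
    matrix = [[int(value) for value in row] for row in csv.reader(f)]

print(matrix)   # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```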
"JSON", short for "JavaScript Object Notation", has emerged as more or less the de facto standard format for interacting with online services. Like CSV, it's a text-based format, but is much more flexible than CSV.
Here's an example: an object in JSON format that represents a person.
In [16]:
person = """
{"name": "Wes",
"places_lived": ["United States", "Spain", "Germany"],
"pet": null,
"siblings": [{"name": "Scott", "age": 25, "pet": "Zuko"},
{"name": "Katie", "age": 33, "pet": "Cisco"}]
}
"""
It looks kind of like a Python dictionary, doesn't it? You have key-value pairs, and they can accommodate almost any data type. In fact, when JSON objects are converted into native Python data structures, they are usually represented using dictionaries.
For reading and writing JSON objects, we can use the built-in json
Python module.
In [17]:
import json
There are two functions of interest: dumps() and loads(). loads() takes a JSON string and converts it into a native Python object, while dumps() does the opposite.
First, we'll take our JSON string and convert it into a Python dictionary:
In [18]:
python_dict = json.loads(person)
print(python_dict)
And if you want to take a Python dictionary and convert it into a JSON string--perhaps you're about to save it to a file, or send it over the network to someone else--we can do that.
In [19]:
json_string = json.dumps(python_dict)
print(json_string)
At first glance, these two print-outs may look the same, but if you look closely you'll see some differences. Plus, if you tried to index json_string["name"], you'd get a TypeError, since json_string is just one big string; python_dict["name"], on the other hand, nicely returns "Wes".
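To make that distinction concrete, here's a minimal sketch of what happens when you try to index the JSON string versus the dictionary (using a pared-down version of the earlier person object):

```python
import json

person = '{"name": "Wes"}'          # a small stand-in for the earlier object

python_dict = json.loads(person)
print(python_dict["name"])          # Wes

json_string = json.dumps(python_dict)
try:
    json_string["name"]             # strings can only be indexed by integers
except TypeError as error:
    print("TypeError:", error)
```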
AVOID AT ALL COSTS.
...but if you have to interact with XML data (e.g., you're manually parsing a web page!), Python has a built-in xml
library.
It's a bit beyond the scope of this course, but feel free to check it out.
What happens when we're not dealing with text? After all, images and videos are most certainly not encoded using text. Furthermore, if memory is an issue, converting text into binary formats can help save space.
There are two primary options for reading and writing binary files:
- pickle, or "pickling", is native to Python and very flexible.
- NumPy's binary format, the natural choice when you're working exclusively with NumPy arrays.

Pickle has some similarities with JSON. In particular, it uses the same method names, dumps() and loads(), for converting between native Python objects and the raw data format. There are several differences, however.
Here's an example of saving (or "serializing") a dictionary using pickle instead of JSON:
In [20]:
import pickle
# We'll use the `python_dict` object from before.
binary_object = pickle.dumps(python_dict)
print(binary_object)
You can kinda see some English in there--mainly, the string constants. But everything else has been encoded in binary. It's typically much more space-efficient, but complete gibberish until you convert it back into a text format (e.g. JSON) or a native Python object (e.g. a dictionary).
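The gibberish round-trips cleanly, though: pickle.loads() hands you back an equal Python object. A quick sketch, using a small dictionary standing in for the earlier python_dict:

```python
import pickle

# A small dictionary standing in for `python_dict` from earlier.
original = {"name": "Wes", "pet": None, "places_lived": ["United States"]}

binary_object = pickle.dumps(original)     # serialize to bytes
restored = pickle.loads(binary_object)     # deserialize back

print(restored == original)   # True
```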
If, on the other hand, you're using NumPy arrays, then you can use its own built-in binary format for saving and loading your arrays.
In [21]:
import numpy as np
# Generate some data and save it.
some_data = np.random.randint(10, size = (3, 3))
print(some_data)
np.save("my_data.npy", some_data)
Now we can load it back:
In [22]:
my_data = np.load("my_data.npy")
print(my_data)
This is by far the easiest format to work with when you're dealing exclusively with NumPy arrays; don't bother with CSV or pickling. You don't even need to set up file descriptors with the NumPy interface.
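If you have several related arrays, NumPy can also bundle them into a single file with np.savez; the keyword names you pass (here, "train" and "test" are just illustrative) become the keys for loading. A sketch:

```python
import numpy as np

train = np.arange(6).reshape(2, 3)
test = np.ones(4)

# Save both arrays into one .npz archive, keyed by name.
np.savez("my_arrays.npz", train=train, test=test)

# np.load on a .npz file returns a dict-like archive.
archive = np.load("my_arrays.npz")
print(archive["train"].shape)   # (2, 3)
print(archive["test"].sum())    # 4.0
```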
We haven't heavily emphasized this aspect of programming--error handling--because, for the most part, data science is about building models and performing computations so you can make inferences from your data.
...except, of course, that nearly every survey says your average data scientist spends the vast majority of their time cleaning and organizing their data.
Data is messy and computers are fickle. Just because that file was there yesterday doesn't mean it'll still be there tomorrow. When you're reading from and writing to files, you'll need to put in checks to make sure things are behaving the way you expect, and if they're not, that you're handling things gracefully.
We're going to become good friends with try
and except
whenever we're dealing with files. For example, let's say I want to read again from that Alice in Wonderland file I had:
In [25]:
with open("alicee.txt", "r") as file_object:
    contents = file_object.readlines()
    print(contents[0])
Whoops. In this example, I simply misnamed the file. In practice, maybe the file was moved; maybe it was renamed; maybe you're getting the file from the user and they incorrectly specified the name. Maybe the hard drive failed, or any number of other "acts of God." Whatever the reason, your program should be able to handle missing files.
You could probably code this up yourself:
In [24]:
filename = "alicee.txt"
try:
    with open(filename, "r") as file_object:
        contents = file_object.readlines()
        print(contents[0])
except FileNotFoundError:
    print("Sorry, the file '{}' does not seem to exist.".format(filename))
Pay attention to this: this will most likely show up on future assignments / exams, and you'll be expected to properly handle missing files or incorrect filenames.
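For contrast, you could also check for the file before opening it with os.path.exists (the "look before you leap" style). Python culture generally prefers try/except, since the file could still vanish between the check and the open, but the check-first sketch looks like this:

```python
import os

filename = "alicee.txt"   # deliberately misspelled, as in the example above

if os.path.exists(filename):
    with open(filename, "r") as file_object:
        print(file_object.readlines()[0])
else:
    print("Sorry, the file '{}' does not seem to exist.".format(filename))
```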
Some questions to discuss and consider:
1: In Part 1 of the lecture when we read in and printed the first few lines of Alice in Wonderland, you'll notice each line of text has a blank line in between. Explain why, and how to fix it (so that printing each line shows text of the book the same way you'd see it in the file).
2: Describe the circumstances under which append "a" mode and write "w" mode are identical.
3: Dictionaries can be very complex; for a good example, just have a look at how big a dictionary representation of a single Tweet is (https://dev.twitter.com/overview/api/tweets): there's "created_at", a string indicating the time the tweet was created; "contributors", a dictionary unto itself identifying users participating in a thread; "entities", a dictionary of lists that includes hashtags and URLs in the tweet; and "user", another gargantuan dictionary containing all the information about the author of the tweet. What would be a good format to store these tweets in on the hard drive? What if we were sending these tweets somewhere, such as a smartphone app; would we use a different format? Explain.
We're pretty much on the regular schedule for the remainder of the semester: Mon / Wed / Fri lectures, and Tues / Thurs assignments.
...with one wrinkle: as announced in L11, I'm holding bi-weekly review sessions, held via Google Hangouts. The next one will be Friday, July 15 at 12pm EDT. Plan to be there if you have any questions!