I wrote a post on multiprocessing with pandas a little over 2 years back. A lot has changed, and I have started to use dask and distributed for distributed computation using pandas. Here I will show how to implement the multiprocessing with pandas blog using dask.
For this example, I will download and use the NYC Taxi & Limousine data: the January 2009 Yellow tripdata file (2GB in size), run on my laptop. Extending to multiple data files and much larger sizes is possible too.
We start by importing dask.dataframe below.
In [1]:
import dask.dataframe as dd
Any large CSV (or other format) file can be read using a pandas-like read_csv command.
In [2]:
df = dd.read_csv(r"C:\temp\yellow_tripdata_2009-01.csv")
It is important to understand that unlike the pandas read_csv, the above command does not actually load the data. It does some inference on the data and leaves the other aspects for later.
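If you want more control over this step, dd.read_csv accepts many of the same keyword arguments as pandas, plus a blocksize that controls how the file is split into partitions. Here is a minimal sketch; the dtype override shown is illustrative and depends on the columns in your file.

import dask.dataframe as dd

# Read lazily, but with explicit hints so dask does not have to guess:
# blocksize sets how large each partition's chunk of the file is, and
# dtype overrides the sampled type inference for specific columns.
df = dd.read_csv(
    r"C:\temp\yellow_tripdata_2009-01.csv",
    blocksize="64MB",
    dtype={"Passenger_Count": "int64"},
)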
Using the npartitions attribute, we can see how many partitions the data will be broken into for loading. Viewing the raw df object gives you a shell of the dataframe with columns and datatypes inferred. The actual data is not loaded yet.
In [3]:
df.npartitions
Out[3]:
You can inspect the inferred columns and datatypes.
In [4]:
df.columns
Out[4]:
In [5]:
df.dtypes
Out[5]:
Computing the length of the dataset can be done by using the size attribute.
In [6]:
size = df.size
size, type(size)
Out[6]:
As you can see above, the size does not return a value yet. The computation is actually deferred until we compute it.
In [7]:
%%time
size.compute()
Out[7]:
This computation comes back with 25MM rows, and it actually took a while. This is because when we compute size, we are not only calculating the size of the data, we are also actually loading the dataset. Now you might think that is not very efficient. There are a couple of approaches you can take:
- Chain your computations lazily, just as you would with pandas but in a distributed paradigm. dask will intelligently load data and process all the computations once by figuring out the various dependencies. This is a great approach if you don't have a lot of RAM available. A sketch of this chained approach is shown after this list.
- Load the data into memory once and reuse it for repeated computations.
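Here is a minimal sketch of the chained first approach: build several lazy results and hand them to dask.compute together, so the data is read only once for all of them. The column name used is illustrative; adjust it to the columns in your file.

import dask
import dask.dataframe as dd

df = dd.read_csv(r"C:\temp\yellow_tripdata_2009-01.csv")

# Both results are lazy; nothing is loaded yet.
n_elements = df.size
mean_passengers = df["Passenger_Count"].mean()

# A single pass over the data computes both results, sharing the file reads.
n_elements, mean_passengers = dask.compute(n_elements, mean_passengers)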
The second approach is to load the data into memory by using the persist method on the df object.
In [8]:
df = df.persist()
The above persist call is non-blocking, so you need to wait a bit for the data to load. Once it is loaded, you can compute the size as above.
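If you are running against a dask.distributed client (which is what makes persist non-blocking; with the default local scheduler it blocks), you can wait explicitly for the load to finish rather than guessing. A minimal sketch, assuming a local Client:

import dask.dataframe as dd
from dask.distributed import Client, wait

client = Client()   # starts a local cluster; pass a scheduler address to use a remote one
df = dd.read_csv(r"C:\temp\yellow_tripdata_2009-01.csv")

df = df.persist()   # returns immediately; loading happens in the background
wait(df)            # block until all partitions are in memory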
In [9]:
%%time
df.size.compute()
Out[9]:
That computed instantly. Now you can scale to much larger data sizes and compute in parallel.
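As a sketch of what scaling out might look like, you could point read_csv at a glob of monthly files and persist them across a cluster's memory. The glob pattern below is hypothetical and assumes the other 2009 files sit alongside the January one.

import dask.dataframe as dd
from dask.distributed import Client

client = Client()   # or Client("scheduler-address:8786") for a real cluster

# A glob pattern picks up every monthly file at once.
df = dd.read_csv(r"C:\temp\yellow_tripdata_2009-*.csv")
df = df.persist()   # spread the partitions across the cluster's memory

# Repeated computations now run in parallel against in-memory partitions.
print(df.size.compute())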