This sample notebook is written in Python and expects the Python 2.7.5 runtime. Make sure the kernel is started and you are connected to it before executing this notebook.
The data source for this example can be found at: http://examples.cloudant.com/crimes/
Replicate the database into your own Cloudant account before you execute this script.
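Replication can be triggered with a plain HTTP POST of a replication document to your account's `/_replicate` endpoint. A minimal sketch (not part of the notebook itself; the account name, user, and password are placeholders you must substitute):

```python
# Hypothetical sketch: replicate the public 'crimes' database into your own
# Cloudant account via the /_replicate endpoint. '<your-account>', '<user>',
# and '<password>' are placeholders -- substitute your own values.
import json

replication_doc = {
    "source": "http://examples.cloudant.com/crimes",
    "target": "https://<your-account>.cloudant.com/crimes",
    "create_target": True,  # create the target database if it does not exist
}

# To actually trigger the replication (not executed here), POST the document:
#   import requests
#   requests.post("https://<your-account>.cloudant.com/_replicate",
#                 auth=("<user>", "<password>"),
#                 headers={"Content-Type": "application/json"},
#                 data=json.dumps(replication_doc))
print(json.dumps(replication_doc, indent=2))
```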
In [245]:
# Import Python stuff
import pprint
from collections import Counter
In [246]:
# Import PySpark stuff
from pyspark.sql import *
from pyspark.sql.functions import udf, asc, desc
from pyspark import SparkContext, SparkConf
from pyspark.sql.types import IntegerType
In [247]:
sc.version
Out[247]:
In [248]:
# sc is an existing SparkContext.
sqlContext = SQLContext(sc)
A DataFrame object can be created directly from a Cloudant database. To configure the database as a source, pass these options:
1 - the package name that provides the classes (like CloudantDataSource) implemented in the connector to extend BaseRelation. For the Cloudant Spark connector this is com.cloudant.spark
2 - cloudant.host parameter to pass the Cloudant account host name
3 - cloudant.username parameter to pass the Cloudant user name
4 - cloudant.password parameter to pass the Cloudant account password
In [250]:
cloudantdata = sqlContext.read.format("com.cloudant.spark").\
option("cloudant.host","examples.cloudant.com").\
option("cloudant.username","examples").\
option("cloudant.password","xxxxx").\
load("crimes")
At this point all transformations and functions should behave as specified with Spark SQL (http://spark.apache.org/sql/).
There are, however, a number of things the Cloudant Spark connector does not support yet, or that are simply not working. For that reason we call this connector a BETA release and are gradually improving it towards GA.
Please direct any change requests to support@cloudant.com
In [251]:
cloudantdata.printSchema()
In [252]:
cloudantdata.count()
Out[252]:
In [253]:
cloudantdata.select("properties.naturecode").show()
Here we filter only those documents where the crime is a disturbance, i.e. where the naturecode is 'DISTRB'
In [254]:
disturbDf = cloudantdata.filter("properties.naturecode = 'DISTRB'")
disturbDf.show()
Finally we write the disturbance crimes back to another Cloudant database, 'crimes_filtered'
In [255]:
disturbDf.select("properties").write.format("com.cloudant.spark").\
option("cloudant.host","kache.cloudant.com").\
option("cloudant.username","kache").\
option("cloudant.password","xxxxx").\
save("crimes_filtered")
In [256]:
reducedValue = cloudantdata.groupBy("properties.naturecode").count()
reducedValue.printSchema()
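As a toy illustration of what this groupBy/count aggregation computes, here is the equivalent frequency count in plain Python, run on a made-up list of naturecodes (not the real crimes data):

```python
# Plain-Python illustration of groupBy("naturecode").count():
# tally how often each code occurs in a made-up sample.
from collections import Counter

sample_codes = ["DISTRB", "LARCENY", "DISTRB", "MVACC", "DISTRB", "LARCENY"]
counts = Counter(sample_codes)
print(counts.most_common())  # -> [('DISTRB', 3), ('LARCENY', 2), ('MVACC', 1)]
```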
I'm converting the Apache Spark DataFrame to a Pandas DataFrame first; Matplotlib simply seems to have better support for Pandas today.
Let's also sort the DataFrame by count first and naturecode second to produce a sorted graph.
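The two-key ordering (count descending, then naturecode ascending) can be sketched in plain Python on made-up rows, before applying it to the Spark DataFrame:

```python
# Plain-Python sketch of the two-key sort: count descending, then
# naturecode ascending as the tie-breaker (rows are made up).
rows = [("MVACC", 5), ("ARREST", 7), ("DISTRB", 7), ("LARCENY", 2)]
ordered = sorted(rows, key=lambda r: (-r[1], r[0]))
print(ordered)
# -> [('ARREST', 7), ('DISTRB', 7), ('MVACC', 5), ('LARCENY', 2)]
```

The `orderBy(desc("count"), asc("naturecode"))` call below expresses the same ordering in Spark SQL terms.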
In [259]:
import pandas as pd
pandaDf = reducedValue.orderBy(desc("count"), asc("naturecode")).toPandas()
print(pandaDf)
In [260]:
# This is needed to actually see the plots
%matplotlib inline
# Additional imports from matplotlib
import matplotlib.pyplot as plt
In [261]:
# The data
values = pandaDf['count']
labels = pandaDf['naturecode']
# The format
plt.gcf().set_size_inches(16, 12, forward=True)
plt.title('Number of crimes by type')
# Barh is a horizontal bar chart with values (x axis) and labels (y axis)
plt.barh(range(len(values)), values)
plt.yticks(range(len(values)), labels)
# Print the plot
plt.show()
In [ ]: