04 Spark essentials

Spark context

The notebook deployment includes Spark automatically within each Python notebook kernel. This means that, upon kernel instantiation, a SparkContext object called sc is immediately available in the notebook, just as in a PySpark shell. Let's take a look at it:


In [1]:
?sc
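
For reference, in a standalone PySpark script (run outside this notebook, e.g. via spark-submit) the context is not created for us. A minimal sketch of creating one by hand; the application name and master URL below are just examples:

from pyspark import SparkConf, SparkContext

# Build a configuration and create the context explicitly.
# Do NOT run this inside the notebook: sc already exists there,
# and only one SparkContext can be active at a time.
conf = SparkConf().setAppName("MyTestApp").setMaster("local[*]")
sc = SparkContext(conf=conf)
print sc.version
sc.stop()   # release the context when done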

We can inspect some of the SparkContext properties:


In [1]:
# Spark version we are using
print sc.version


1.6.0

In [3]:
# Name of the application we are running
print sc.appName


PySparkShell

In [4]:
# Some configuration variables
print sc.defaultParallelism
print sc.defaultMinPartitions


1
1
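
These defaults determine, for instance, how many partitions sc.parallelize creates when no explicit number is requested. A quick check, as a sketch using the same sc (the second call asks for 4 partitions explicitly):

rdd = sc.parallelize( xrange(100) )
print rdd.getNumPartitions()       # no value given: uses sc.defaultParallelism
rdd4 = sc.parallelize( xrange(100), 4 )
print rdd4.getNumPartitions()      # 4, as explicitly requested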

In [3]:
# Username running all Spark processes
# --> Note this is a method, not a property
print sc.sparkUser()


sparkvmuser

Spark configuration


In [2]:
# Print out the SparkContext configuration
print sc._conf.toDebugString()


spark.app.name=PySparkShell
spark.eventLog.dir=hdfs://samson01.hi.inet:8020/user/spark/applicationHistory
spark.eventLog.enabled=true
spark.master=yarn-client
spark.rdd.compress=True
spark.serializer.objectStreamReset=100
spark.submit.deployMode=client
spark.yarn.historyServer.address=samson03.hi.inet:18080
spark.yarn.isPython=true
spark.yarn.jar=hdfs://samson01.hi.inet:8020/user/spark/share/lib/spark-assembly.jar
spark.yarn.preserve.staging.files=false

In [7]:
# Another way to get similar information
from pyspark import SparkConf, SparkContext
SparkConf().getAll()


Out[7]:
[(u'spark.eventLog.enabled', u'true'),
 (u'spark.eventLog.dir', u'/var/log/spark'),
 (u'spark.master', u'local[*]'),
 (u'spark.submit.deployMode', u'client'),
 (u'spark.app.name', u'PySparkShell')]
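
Individual properties can also be read from (and, before a context is created, set on) a SparkConf object. A minimal sketch, where the property names and values are just examples:

from pyspark import SparkConf

conf = SparkConf()
print conf.get("spark.app.name", "not set")   # read a property, with a default value
conf.set("spark.executor.memory", "1g")       # set a property (only takes effect for a new SparkContext)
print conf.toDebugString()                    # dump the explicitly set properties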

Spark execution modes

We can also find out the execution mode this kernel is running under by looking again at the configuration data:


In [8]:
print sc._conf.toDebugString()


spark.app.name=PySparkShell
spark.eventLog.dir=hdfs://samson01.hi.inet:8020/user/spark/applicationHistory
spark.eventLog.enabled=true
spark.master=yarn-client
spark.rdd.compress=True
spark.serializer.objectStreamReset=100
spark.submit.deployMode=client
spark.yarn.historyServer.address=samson03.hi.inet:18080
spark.yarn.isPython=true
spark.yarn.jar=hdfs://samson01.hi.inet:8020/user/spark/share/lib/spark-assembly.jar
spark.yarn.preserve.staging.files=false

... this includes the execution mode for Spark, given by the spark.master property. The default mode is local, i.e. all Spark processes run locally in the launched virtual machine. This is fine for developing and testing with small datasets.
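
A quick way to check the active mode from within the notebook is to print the master URL directly (a sketch; the value shown depends on the configured mode):

print sc.master    # e.g. 'local[*]', 'yarn-client' or 'spark://<host>:7077'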

But to run Spark applications on bigger datasets, they must be executed on a remote cluster. This deployment comes with configuration modes for that, which require:

  • network adjustments to make the VM "visible" from the cluster: the virtual machine must be started in bridged mode (the default Vagrantfile already does this)
  • configuring the addresses for the cluster. This is done within the VM by using the spark-notebook script, e.g.
    sudo service spark-notebook set-addr <master-ip> <namenode-ip> <historyserver-ip>
  • activating the desired mode by executing
    sudo service spark-notebook set-mode (local | standalone | yarn)

These operations can also be performed outside the VM by telling vagrant to relay them, e.g.

vagrant ssh -c "sudo service spark-notebook set-mode local"

A trivial test

Let's do a trivial operation that creates an RDD and executes an action on it, so that we can check that the kernel is capable of launching executors.


In [1]:
from operator import add

l = sc.parallelize( xrange(10000) )
print l.reduce( add )


49995000
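
Along the same lines, a slightly richer sketch that chains a transformation and two actions on the same context (expected values shown as comments):

data = sc.parallelize( xrange(10000) )
evens = data.filter( lambda x: x % 2 == 0 )   # transformation: lazily select the even numbers
print evens.count()                           # action: 5000
print evens.take(5)                           # action: [0, 2, 4, 6, 8]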
