Jupyter/Colab Notebook to Showcase sparkMeasure APIs for Python

SparkMeasure is a tool for performance troubleshooting of Apache Spark workloads.
It simplifies the collection and analysis of Spark performance metrics. It is also intended as a working example of how to use Spark listeners to collect and process Spark executor task metrics data.
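
As a preview, the core usage pattern demonstrated in the cells below boils down to a few lines. This sketch assumes a SparkSession named spark with the sparkMeasure package already attached; the full setup follows in the cells below:

# Minimal sketch of the stage-level instrumentation pattern shown in this notebook
from sparkmeasure import StageMetrics
stagemetrics = StageMetrics(spark)   # attach the sparkMeasure listener to the Spark session

stagemetrics.begin()                 # start collecting stage metrics
spark.sql("select count(*) from range(1000) cross join range(1000)").show()
stagemetrics.end()                   # stop collecting
stagemetrics.print_report()          # print the aggregated metrics report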

Contact: Luca.Canali@cern.ch, February 2019


In [1]:
# Install Spark
# Note: this installs the latest PySpark release from PyPI (2.4.3 as of the last test, May 2019)

!pip install pyspark

In [2]:
from pyspark.sql import SparkSession

# Create the Spark Session
# This example uses a local cluster; you can change the master to use YARN or K8S if available (see the sketch after this cell)
# This example downloads sparkMeasure 0.14 for Scala 2.11 from Maven Central

spark = SparkSession \
 .builder \
 .master("local[*]") \
 .appName("Test sparkmeasure instrumentation of Python/PySpark code") \
 .config("spark.jars.packages","ch.cern.sparkmeasure:spark-measure_2.11:0.14")  \
 .getOrCreate()
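
As noted in the comments above, the same session can be built against a different cluster manager. A sketch for a YARN deployment follows; the executor settings are hypothetical examples, not values tested in this notebook:

# Sketch: the same configuration, but running on YARN instead of local mode
# Note: the executor settings below are illustrative placeholders, tune them for your cluster
spark = SparkSession \
 .builder \
 .master("yarn") \
 .appName("Test sparkmeasure instrumentation of Python/PySpark code") \
 .config("spark.jars.packages","ch.cern.sparkmeasure:spark-measure_2.11:0.14") \
 .config("spark.executor.instances","4") \
 .config("spark.executor.memory","4g") \
 .getOrCreate()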

In [3]:
# test that Spark is working OK
spark.sql("select 1 as id, 'Hello world!' as Greeting").show()


+---+------------+
| id|    Greeting|
+---+------------+
|  1|Hello world!|
+---+------------+


In [4]:
# Install the Python wrapper API for spark-measure

!pip install sparkmeasure

In [5]:
# Load the Python API in the sparkmeasure package
# and attach the sparkMeasure listener for stage metrics to the active Spark session

from sparkmeasure import StageMetrics
stagemetrics = StageMetrics(spark)

In [6]:
# Define cell and line magic to wrap the instrumentation
from IPython.core.magic import (register_line_magic, register_cell_magic, register_line_cell_magic)

@register_line_cell_magic
def sparkmeasure(line, cell=None):
    "run and measure spark workload. Use: %sparkmeasure or %%sparkmeasure"
    val = cell if cell is not None else line
    stagemetrics.begin()
    eval(val)
    stagemetrics.end()
    stagemetrics.print_report()

In [7]:
%%sparkmeasure
spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()


+---------+
| count(1)|
+---------+
|100000000|
+---------+


Scheduling mode = FIFO
Spark Context default degree of parallelism = 8
Aggregated Spark stage metrics:
numStages => 4
sum(numTasks) => 25
elapsedTime => 1488 (1 s)
sum(stageDuration) => 1456 (1 s)
sum(executorRunTime) => 10085 (10 s)
sum(executorCpuTime) => 9582 (10 s)
sum(executorDeserializeTime) => 172 (0.2 s)
sum(executorDeserializeCpuTime) => 83 (83 ms)
sum(resultSerializationTime) => 10 (10 ms)
sum(jvmGCTime) => 0 (0 ms)
sum(shuffleFetchWaitTime) => 0 (0 ms)
sum(shuffleWriteTime) => 10 (10 ms)
max(resultSize) => 21343 (20.0 KB)
sum(numUpdatedBlockStatuses) => 0
sum(diskBytesSpilled) => 0 (0 Bytes)
sum(memoryBytesSpilled) => 0 (0 Bytes)
max(peakExecutionMemory) => 0
sum(recordsRead) => 2100
sum(bytesRead) => 0 (0 Bytes)
sum(recordsWritten) => 0
sum(bytesWritten) => 0 (0 Bytes)
sum(shuffleTotalBytesRead) => 472 (472 Bytes)
sum(shuffleTotalBlocksFetched) => 8
sum(shuffleLocalBlocksFetched) => 8
sum(shuffleRemoteBlocksFetched) => 0
sum(shuffleBytesWritten) => 472 (472 Bytes)
sum(shuffleRecordsWritten) => 8

In [8]:
# Print additional metrics from accumulables
stagemetrics.print_accumulables()


Aggregated Spark accumulables of type internal.metric. Sum of values grouped by metric name
Name => sum(value) [group by name]

executorCpuTime => 9584 (10 s)
executorDeserializeCpuTime => 85 (85 ms)
executorDeserializeTime => 172 (0.2 s)
executorRunTime => 10085 (10 s)
input.recordsRead => 2100
resultSerializationTime => 10 (10 ms)
resultSize => 44249 (43.0 KB)
shuffle.read.fetchWaitTime => 0 (0 ms)
shuffle.read.localBlocksFetched => 8
shuffle.read.localBytesRead => 472 (472 Bytes)
shuffle.read.recordsRead => 8
shuffle.read.remoteBlocksFetched => 0
shuffle.read.remoteBytesRead => 0 (0 Bytes)
shuffle.read.remoteBytesReadToDisk => 0 (0 Bytes)
shuffle.write.bytesWritten => 472 (472 Bytes)
shuffle.write.recordsWritten => 8
shuffle.write.writeTime => 10 (10 ms)

SQL Metrics and other non-internal metrics. Values grouped per accumulatorId and metric name.
Accid, Name => max(value) [group by accId, name]

   29, data size total => 119 (119 Bytes)
   30, duration total => 2 (2 ms)
   31, number of output rows => 1
   34, aggregate time total => 2 (2 ms)
   36, duration total => 9897 (10 s)
   37, number of output rows => 8
   40, aggregate time total => 9825 (10 s)
   42, number of output rows => 100000000
   43, number of output rows => 1000000
   44, duration total => 9937 (10 s)
   45, number of output rows => 1000
   51, number of output rows => 1000
   57, number of output rows => 100
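
Besides the printed reports, the collected stage metrics can also be exposed as a Spark DataFrame for further processing or saved to a file. A minimal sketch, assuming the create_stagemetrics_DF and save_data methods are available in the installed sparkmeasure version (the output path is a placeholder):

# Sketch: export the collected stage metrics as a Spark DataFrame
# Assumes create_stagemetrics_DF and save_data exist in this sparkmeasure version
df = stagemetrics.create_stagemetrics_DF("PerfStageMetrics")
df.show()

# Optionally persist the metrics for offline analysis (placeholder path)
stagemetrics.save_data(df, "/tmp/stagemetrics_test", "json")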

In [9]:
# You can also explicitly wrap your Spark workload with the stagemetrics instrumentation,
# as in this example
stagemetrics.begin()

spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()

stagemetrics.end()
# Print a summary report
stagemetrics.print_report()


+---------+
| count(1)|
+---------+
|100000000|
+---------+


Scheduling mode = FIFO
Spark Context default degree of parallelism = 8
Aggregated Spark stage metrics:
numStages => 4
sum(numTasks) => 25
elapsedTime => 1563 (2 s)
sum(stageDuration) => 1541 (2 s)
sum(executorRunTime) => 11610 (12 s)
sum(executorCpuTime) => 11153 (11 s)
sum(executorDeserializeTime) => 37 (37 ms)
sum(executorDeserializeCpuTime) => 15 (15 ms)
sum(resultSerializationTime) => 1 (1 ms)
sum(jvmGCTime) => 24 (24 ms)
sum(shuffleFetchWaitTime) => 0 (0 ms)
sum(shuffleWriteTime) => 1 (1 ms)
max(resultSize) => 21343 (20.0 KB)
sum(numUpdatedBlockStatuses) => 0
sum(diskBytesSpilled) => 0 (0 Bytes)
sum(memoryBytesSpilled) => 0 (0 Bytes)
max(peakExecutionMemory) => 0
sum(recordsRead) => 2100
sum(bytesRead) => 0 (0 Bytes)
sum(recordsWritten) => 0
sum(bytesWritten) => 0 (0 Bytes)
sum(shuffleTotalBytesRead) => 472 (472 Bytes)
sum(shuffleTotalBlocksFetched) => 8
sum(shuffleLocalBlocksFetched) => 8
sum(shuffleRemoteBlocksFetched) => 0
sum(shuffleBytesWritten) => 472 (472 Bytes)
sum(shuffleRecordsWritten) => 8

In [10]:
# Another way to encapsulate code and instrumentation in a compact form

stagemetrics.runandmeasure(locals(), """
spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()
""")


+---------+
| count(1)|
+---------+
|100000000|
+---------+


Scheduling mode = FIFO
Spark Context default degree of parallelism = 8
Aggregated Spark stage metrics:
numStages => 4
sum(numTasks) => 25
elapsedTime => 1518 (2 s)
sum(stageDuration) => 1493 (1 s)
sum(executorRunTime) => 11473 (11 s)
sum(executorCpuTime) => 11134 (11 s)
sum(executorDeserializeTime) => 54 (54 ms)
sum(executorDeserializeCpuTime) => 18 (18 ms)
sum(resultSerializationTime) => 0 (0 ms)
sum(jvmGCTime) => 40 (40 ms)
sum(shuffleFetchWaitTime) => 0 (0 ms)
sum(shuffleWriteTime) => 1 (1 ms)
max(resultSize) => 21472 (20.0 KB)
sum(numUpdatedBlockStatuses) => 0
sum(diskBytesSpilled) => 0 (0 Bytes)
sum(memoryBytesSpilled) => 0 (0 Bytes)
max(peakExecutionMemory) => 0
sum(recordsRead) => 2100
sum(bytesRead) => 0 (0 Bytes)
sum(recordsWritten) => 0
sum(bytesWritten) => 0 (0 Bytes)
sum(shuffleTotalBytesRead) => 472 (472 Bytes)
sum(shuffleTotalBlocksFetched) => 8
sum(shuffleLocalBlocksFetched) => 8
sum(shuffleRemoteBlocksFetched) => 0
sum(shuffleBytesWritten) => 472 (472 Bytes)
sum(shuffleRecordsWritten) => 8

Example of collecting metrics at task granularity using TaskMetrics

Collecting Spark task metrics at the granularity of each task completion has additional overhead compared to collecting at stage completion. This option should therefore only be used when you need the finer granularity, for example to study skew effects; otherwise, prefer the stage-level aggregation of StageMetrics. A sketch of how to inspect the per-task data for skew follows the example below.


In [11]:
from sparkmeasure import TaskMetrics
taskmetrics = TaskMetrics(spark)

taskmetrics.begin()
spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()
taskmetrics.end()
taskmetrics.print_report()


+---------+
| count(1)|
+---------+
|100000000|
+---------+


Scheduling mode = FIFO
Spark Context default degree of parallelism = 8
Aggregated Spark task metrics:
numtasks => 25
elapsedTime => 1478 (1 s)
sum(duration) => 11402 (11 s)
sum(schedulerDelay) => 62
sum(executorRunTime) => 11299 (11 s)
sum(executorCpuTime) => 11208 (11 s)
sum(executorDeserializeTime) => 40 (40 ms)
sum(executorDeserializeCpuTime) => 7 (7 ms)
sum(resultSerializationTime) => 1 (1 ms)
sum(jvmGCTime) => 0 (0 ms)
sum(shuffleFetchWaitTime) => 0 (0 ms)
sum(shuffleWriteTime) => 0 (0 ms)
sum(gettingResultTime) => 0 (0 ms)
max(resultSize) => 2641 (2.0 KB)
sum(numUpdatedBlockStatuses) => 0
sum(diskBytesSpilled) => 0 (0 Bytes)
sum(memoryBytesSpilled) => 0 (0 Bytes)
max(peakExecutionMemory) => 0
sum(recordsRead) => 2100
sum(bytesRead) => 0 (0 Bytes)
sum(recordsWritten) => 0
sum(bytesWritten) => 0 (0 Bytes)
sum(shuffleTotalBytesRead) => 472 (472 Bytes)
sum(shuffleTotalBlocksFetched) => 8
sum(shuffleLocalBlocksFetched) => 8
sum(shuffleRemoteBlocksFetched) => 0
sum(shuffleBytesWritten) => 472 (472 Bytes)
sum(shuffleRecordsWritten) => 8
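
Since the main motivation for task-level collection is studying skew, it can be useful to look at the per-task data rather than only the aggregated report. A minimal sketch, assuming the create_taskmetrics_DF method is available in the installed sparkmeasure version (column names such as "duration" may differ between versions):

# Sketch: inspect per-task metrics to look for skew
# Assumes create_taskmetrics_DF exists in this sparkmeasure version
df_tasks = taskmetrics.create_taskmetrics_DF("PerfTaskMetrics")

# A large spread in duration across tasks is a typical sign of skew
df_tasks.orderBy("duration", ascending=False).show(10)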
