SparkMeasure is a tool for performance troubleshooting of Apache Spark workloads.
It simplifies the collection and analysis of Spark performance metrics. It is also intended as a working example of how to use Spark listeners to collect and process Spark executor task metrics data.
References: https://github.com/LucaCanali/sparkMeasure
Architecture: see the sparkMeasure architecture diagram in the project documentation
Contact: Luca.Canali@cern.ch, February 2019
In [1]:
# Install Spark
# Note: this installs the latest available Spark version (2.4.3 at the time of testing, May 2019)
!pip install pyspark
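If you want to reproduce the environment this notebook was tested with, you can pin the version instead; this is an optional variation, not part of the original notebook, using the version number mentioned in the comment above:
# Optional: pin PySpark to the tested version (2.4.3) for reproducibility
!pip install pyspark==2.4.3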
In [2]:
from pyspark.sql import SparkSession
# Create the Spark Session
# This example uses local mode; you can modify master to use YARN or K8S if available
# (a hedged YARN variation is sketched after this cell)
# This example downloads sparkMeasure 0.14 for Scala 2.11 from Maven Central
spark = SparkSession \
.builder \
.master("local[*]") \
.appName("Test sparkmeasure instrumentation of Python/PySpark code") \
.config("spark.jars.packages","ch.cern.sparkmeasure:spark-measure_2.11:0.14") \
.getOrCreate()
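As a variation not in the original notebook, the same session can be pointed at a YARN cluster by changing the master. This is only a sketch, assuming a Hadoop/YARN client configuration is available on the machine running the notebook:
# Sketch: same configuration, but running on YARN instead of local mode
# Note: run this instead of the local-mode cell above; getOrCreate() would
# otherwise just return the already-active local session
spark = SparkSession \
    .builder \
    .master("yarn") \
    .appName("Test sparkmeasure instrumentation of Python/PySpark code") \
    .config("spark.jars.packages", "ch.cern.sparkmeasure:spark-measure_2.11:0.14") \
    .getOrCreate()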
In [3]:
# test that Spark is working OK
spark.sql("select 1 as id, 'Hello world!' as Greeting").show()
In [4]:
# Install the Python wrapper API for spark-measure
!pip install sparkmeasure
In [5]:
# Load the Python API in the sparkmeasure package
# and attach the sparkMeasure listener for stage metrics to the active Spark session
from sparkmeasure import StageMetrics
stagemetrics = StageMetrics(spark)
In [6]:
# Define cell and line magic to wrap the instrumentation
from IPython.core.magic import (register_line_magic, register_cell_magic, register_line_cell_magic)
@register_line_cell_magic
def sparkmeasure(line, cell=None):
    "Run and measure a Spark workload. Use: %sparkmeasure or %%sparkmeasure"
    # use the cell body when invoked as %%sparkmeasure, otherwise the line content
    val = cell if cell is not None else line
    stagemetrics.begin()
    eval(val)
    stagemetrics.end()
    stagemetrics.print_report()
In [7]:
%%sparkmeasure
spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()
In [8]:
# Print additional metrics from accumulables
stagemetrics.print_accumulables()
In [9]:
# You can also explicitly wrap your Spark workload with stagemetrics instrumentation,
# as in this example
stagemetrics.begin()
spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()
stagemetrics.end()
# Print a summary report
stagemetrics.print_report()
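Since begin(), end() and print_report() form a natural enter/exit pair, they can also be wrapped in a small context manager. This is a sketch, not part of the sparkmeasure API; the helper name measure_stages is made up here for illustration:
# Sketch (hypothetical helper, not part of sparkmeasure): wrap begin/end/print_report
# so that workloads can be measured with a "with" block
from contextlib import contextmanager

@contextmanager
def measure_stages(metrics):
    metrics.begin()
    try:
        yield metrics
    finally:
        metrics.end()
        metrics.print_report()

with measure_stages(stagemetrics):
    spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()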
In [10]:
# Another way to encapsulate code and instrumentation in a compact form
stagemetrics.runandmeasure(locals(), """
spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()
""")
Collecting Spark task metrics at the granularity of each task completion has additional overhead compared to collecting at stage completion level. Use this option only if you need data at this finer granularity, for example to study the effects of task skew; otherwise prefer the stagemetrics aggregation.
In [11]:
from sparkmeasure import TaskMetrics
taskmetrics = TaskMetrics(spark)
taskmetrics.begin()
spark.sql("select count(*) from range(1000) cross join range(1000) cross join range(100)").show()
taskmetrics.end()
taskmetrics.print_report()
In [ ]: