This tutorial shows some examples of trace analysis and visualization using the pre-defined analysis and plotting functions provided by the Filters and Trace modules of LISA.
In [1]:
import logging
from conf import LisaLogging
LisaLogging.setup()
In [2]:
# Generate plots inline
%matplotlib inline
# Python modules required by this notebook
import json
import os
In [3]:
# Let's use an example trace
res_dir = './example_results'
tracefile = os.path.join(res_dir, 'trace.dat')
platformfile = os.path.join(res_dir, 'platform.json')
!tree {res_dir}
In [4]:
# Trace events of interest
events_to_parse = [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_frequency",
"cpu_capacity",
]
# Platform description
with open(platformfile, 'r') as fh:
    platform = json.load(fh)
logging.info("CPUs max capacities:")
logging.info(" big: %5d (cpus: %s)",
platform['nrg_model']['big']['cpu']['cap_max'],
platform['clusters']['big'])
logging.info("LITTLE: %5d (cpus: %s)",
platform['nrg_model']['little']['cpu']['cap_max'],
platform['clusters']['little'])
# Time range for the analysis
(t_min, t_max) = (0, None)
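The `(t_min, t_max)` pair defines the time window that is passed to the trace parser below, where `None` means "up to the end of the trace". A minimal, self-contained sketch of that windowing semantic (the sample data and the `in_window` helper are invented for illustration; they are not LISA API):

```python
# Hypothetical trace samples: (timestamp_s, event_name) pairs
samples = [(0.5, "sched_switch"), (1.2, "cpu_frequency"),
           (2.7, "sched_wakeup"), (4.1, "sched_switch")]

def in_window(samples, t_min, t_max):
    """Keep samples inside [t_min, t_max]; a None bound is open-ended."""
    lo = t_min if t_min is not None else float("-inf")
    hi = t_max if t_max is not None else float("inf")
    return [(t, e) for (t, e) in samples if lo <= t <= hi]

print(in_window(samples, 1.0, 3.0))  # middle slice of the trace
print(in_window(samples, 0, None))   # whole trace, like (t_min, t_max) above
```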
In [5]:
# Load the LISA::Trace parsing module
from trace import Trace
# The LISA::Trace module is a wrapper around the TRAPpy FTrace module which
# keeps track of platform-specific details to support the generation of
# platform-aware plots
trace = Trace(res_dir, events_to_parse, platform, window=(t_min,t_max))
Notice how some platform-specific data is collected and reported by the LISA::Trace module.
In [6]:
# This is the standard TRAPpy::FTrace object, already configured for the
# analysis related to the events of interest
ftrace = trace.ftrace
logging.info("List of events identified in the trace:")
for event in ftrace.class_definitions.keys():
    logging.info(" %s", event)
In [7]:
# The original TRAPpy::FTrace DataSets are still accessible by specifying the
# name of the trace event of interest
trace.data_frame.trace_event('sched_load_avg_task').head()
Out[7]:
In [8]:
trace.setXTimeRange(t_min, t_max)
In [9]:
# Get a list of the biggest ("big") tasks in the trace
top_big_tasks = trace.data_frame.top_big_tasks(
min_utilization=None, # Minimum utilization to be considered "big"
# default: LITTLE CPUs max capacity
min_samples=100, # Number of samples over the minimum utilization
)
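LISA's exact classification logic is not shown here, but the parameters suggest the idea: a task counts as "big" when its utilization exceeds `min_utilization` (by default the LITTLE CPUs' max capacity) in at least `min_samples` samples. A self-contained sketch of that idea, with invented task names, utilization values, and threshold:

```python
# Hypothetical per-task utilization samples (names and values invented)
util_samples = {
    "render_thread": [450, 480, 510, 500, 470],
    "logger":        [40, 55, 35, 60, 50],
}

LITTLE_CAP_MAX = 447  # assumed default threshold: LITTLE CPUs max capacity

def top_big_tasks(samples, min_utilization=LITTLE_CAP_MAX, min_samples=3):
    """Tasks with at least min_samples utilization samples above threshold."""
    return [name for name, utils in samples.items()
            if sum(u > min_utilization for u in utils) >= min_samples]

print(top_big_tasks(util_samples))
```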
In [10]:
# The collected information is available for further analysis
top_big_tasks
Out[10]:
In [11]:
# Plot utilization of "big" tasks decorated with platform specific capacity information
trace.analysis.tasks.plotBigTasks()
In [12]:
top_wakeup_tasks = trace.data_frame.top_wakeup_tasks(
min_wakeups=100 # Minimum number of wakeups to be reported
)
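Conceptually, `top_wakeup_tasks` counts `sched_wakeup` events per task and keeps only the tasks woken up at least `min_wakeups` times. A self-contained sketch of that counting, with an invented event stream (not LISA's implementation):

```python
from collections import Counter

# Hypothetical stream of sched_wakeup events: the name of the task woken up
wakeup_events = (["kworker/0:1"] * 120 + ["surfaceflinger"] * 95
                 + ["rcu_sched"] * 150)

def top_wakeup_tasks(events, min_wakeups=100):
    """Count wakeups per task; report (count, task) pairs, most-woken first."""
    counts = Counter(events)
    return sorted(((n, task) for task, n in counts.items() if n >= min_wakeups),
                  reverse=True)

print(top_wakeup_tasks(wakeup_events))
```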
In [13]:
top_wakeup_tasks.head()
Out[13]:
In [14]:
trace.analysis.tasks.plotWakeupTasks(per_cluster=False)
In [15]:
trace.analysis.tasks.plotWakeupTasks(per_cluster=True)
In [16]:
trace.data_frame.rt_tasks(min_prio=100)
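In the Linux kernel, internal priorities 0-99 form the real-time range (lower value means higher priority) while 100-139 belong to normal CFS tasks, so `min_prio=100` selects the RT tasks. A self-contained sketch of that filter (task names are invented, and the exact comparison LISA uses may differ):

```python
# Hypothetical (task, kernel_prio) pairs; kernel prio 0-99 is real-time
tasks = [("migration/0", 0), ("irq/123-touch", 49),
         ("bash", 120), ("kswapd0", 120)]

def rt_tasks(tasks, min_prio=100):
    """Report tasks whose kernel priority is below min_prio (RT class)."""
    return [(name, prio) for name, prio in tasks if prio < min_prio]

print(rt_tasks(tasks))
```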
Out[16]:
The Trace class provides an analysis object that allows performing several types of analysis on the data contained in the trace. The currently available analysis types are:
Analysis Object | Description
---|---
cpus | CPUs Analysis
eas | EAS-specific functionalities Analysis
functions | Functions Profiling Analysis
frequency | Frequency Analysis
status | System Status Analysis
tasks | Tasks Analysis
These are easily accessible via:
trace.analysis.<analysis_object>
In [17]:
# Define time ranges for all the time based plots
trace.setXTimeRange(t_min, t_max)
In [18]:
trace.analysis.tasks.plotTasks(top_big_tasks.index.tolist())
In [19]:
# Cluster frequencies
trace.analysis.frequency.plotClusterFrequencies()
Out[19]:
In [20]:
# Plot SchedTune's Energy-Diff space filtering
trace.analysis.eas.plotEDiffTime()