Cgroups

cgroups (abbreviated from control groups) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.

A control group is a collection of processes that are bound by the same criteria and associated with a set of parameters or limits. Groups are hierarchical: each group inherits the limits of its parent group. The kernel exposes multiple controllers (also called subsystems) through the cgroup interface; for example, the "memory" controller limits memory use, the "cpuacct" controller accounts for CPU usage, and so on.
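
The kernel advertises each controller, the hierarchy it is mounted on, and how many cgroups it currently hosts in /proc/cgroups; a process's own membership is listed in /proc/self/cgroup. As a minimal sketch of this raw interface (runnable on any Linux host, independently of the devlib wrappers used in the rest of this notebook):

# One line per controller: subsys_name, hierarchy id, num_cgroups, enabled
# (the same fields unpacked from list_subsystems() further below)
with open('/proc/cgroups') as f:
    print f.read()

# The calling process's position within each mounted hierarchy
with open('/proc/self/cgroup') as f:
    print f.read()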


In [1]:
import logging
from conf import LisaLogging
LisaLogging.setup()


2016-12-08 11:42:27,154 INFO    : root         : Using LISA logging configuration:
2016-12-08 11:42:27,155 INFO    : root         :   /home/vagrant/lisa/logging.conf

In [2]:
import os
import json
import operator

import devlib
import trappy
import bart

from bart.sched.SchedMultiAssert import SchedMultiAssert
from wlgen import RTA, Periodic

Target Configuration

The target configuration is used to describe and configure your test environment. You can find more details in examples/utils/testenv_example.ipynb.


In [3]:
from env import TestEnv

my_conf = {

    # Android Pixel
    "platform"     : "android",
    "board"        : "pixel",
    
    "device"       : "HT6670300102",
    "ANDROID_HOME" : "/home/vagrant/lisa/tools/android-sdk-linux/",
    
    "exclude_modules" : [ "hwmon" ],

    # List of additional devlib modules to install 
    "modules" : ['cgroups', 'bl', 'cpufreq'],
    
    # List of additional binary tools to install
    "tools" : ['rt-app', 'trace-cmd'],
    
    # FTrace events to collect
    "ftrace" : {
         "events" : [
             "sched_switch"
         ],
         "buffsize" : 10240
    }
}

te = TestEnv(my_conf, force_new=True)
target = te.target

# Report target connection
logging.info('Connected to %s target', target.abi)
print "DONE"


2016-12-08 11:42:29,845 INFO    : TestEnv      : Using base path: /home/vagrant/lisa
2016-12-08 11:42:29,845 INFO    : TestEnv      : Loading custom (inline) target configuration
2016-12-08 11:42:29,845 INFO    : TestEnv      : External tools using:
2016-12-08 11:42:29,846 INFO    : TestEnv      :    ANDROID_HOME: /home/vagrant/lisa/tools/android-sdk-linux/
2016-12-08 11:42:29,846 INFO    : TestEnv      :    CATAPULT_HOME: /home/vagrant/lisa/tools/catapult
2016-12-08 11:42:29,847 INFO    : TestEnv      : Loading board:
2016-12-08 11:42:29,847 INFO    : TestEnv      :    /home/vagrant/lisa/libs/utils/platforms/pixel.json
2016-12-08 11:42:29,848 INFO    : TestEnv      : Devlib modules to load: [u'bl', u'cpufreq', 'cgroups']
2016-12-08 11:42:29,848 INFO    : TestEnv      : Connecting Android target [HT6670300102]
2016-12-08 11:42:29,848 INFO    : TestEnv      : Connection settings:
2016-12-08 11:42:29,849 INFO    : TestEnv      :    {'device': 'HT6670300102'}
2016-12-08 11:42:30,008 INFO    : android      : ls command is set to ls -1
2016-12-08 11:42:31,253 INFO    : TestEnv      : Initializing target workdir:
2016-12-08 11:42:31,256 INFO    : TestEnv      :    /data/local/tmp/devlib-target
2016-12-08 11:42:38,346 INFO    : CGroups      : Available controllers:
2016-12-08 11:42:39,117 INFO    : CGroups      :   cpuset       : /data/local/tmp/devlib-target/cgroups/devlib_cgh4
2016-12-08 11:42:39,840 INFO    : CGroups      :   cpu          : /data/local/tmp/devlib-target/cgroups/devlib_cgh3
2016-12-08 11:42:40,638 INFO    : CGroups      :   cpuacct      : /data/local/tmp/devlib-target/cgroups/devlib_cgh1
2016-12-08 11:42:41,416 INFO    : CGroups      :   schedtune    : /data/local/tmp/devlib-target/cgroups/devlib_cgh2
2016-12-08 11:42:42,169 INFO    : CGroups      :   freezer      : /data/local/tmp/devlib-target/cgroups/devlib_cgh0
2016-12-08 11:42:42,287 INFO    : TestEnv      : Topology:
2016-12-08 11:42:42,288 INFO    : TestEnv      :    [[0, 1], [2, 3]]
2016-12-08 11:42:42,691 INFO    : TestEnv      : Loading default EM:
2016-12-08 11:42:42,693 INFO    : TestEnv      :    /home/vagrant/lisa/libs/utils/platforms/pixel.json
2016-12-08 11:42:44,021 INFO    : TestEnv      : Enabled tracepoints:
2016-12-08 11:42:44,022 INFO    : TestEnv      :    sched_switch
2016-12-08 11:42:44,022 INFO    : TestEnv      : Calibrating RTApp...
2016-12-08 11:42:44,259 INFO    : RTApp        : CPU0 calibration...
2016-12-08 11:42:44,328 INFO    : Workload     : Setup new workload rta_calib
2016-12-08 11:42:44,329 INFO    : Workload     : Workload duration defined by longest task
2016-12-08 11:42:44,330 INFO    : Workload     : Default policy: SCHED_OTHER
2016-12-08 11:42:44,330 INFO    : Workload     : ------------------------
2016-12-08 11:42:44,331 INFO    : Workload     : task [task1], sched: {'policy': 'FIFO', 'prio': 0}
2016-12-08 11:42:44,331 INFO    : Workload     :  | calibration CPU: 0
2016-12-08 11:42:44,332 INFO    : Workload     :  | loops count: 1
2016-12-08 11:42:44,334 INFO    : Workload     : + phase_000001: duration 1.000000 [s] (10 loops)
2016-12-08 11:42:44,335 INFO    : Workload     : |  period   100000 [us], duty_cycle  50 %
2016-12-08 11:42:44,335 INFO    : Workload     : |  run_time  50000 [us], sleep_time  50000 [us]
2016-12-08 11:42:44,466 INFO    : Workload     : Workload execution START:
2016-12-08 11:42:44,467 INFO    : Workload     :    /data/local/tmp/bin/taskset 0x1 /data/local/tmp/bin/rt-app /data/local/tmp/devlib-target/rta_calib_00.json 2>&1
2016-12-08 11:42:46,114 INFO    : RTApp        : CPU1 calibration...
2016-12-08 11:42:46,183 INFO    : Workload     : Setup new workload rta_calib
2016-12-08 11:42:46,183 INFO    : Workload     : Workload duration defined by longest task
2016-12-08 11:42:46,184 INFO    : Workload     : Default policy: SCHED_OTHER
2016-12-08 11:42:46,184 INFO    : Workload     : ------------------------
2016-12-08 11:42:46,185 INFO    : Workload     : task [task1], sched: {'policy': 'FIFO', 'prio': 0}
2016-12-08 11:42:46,185 INFO    : Workload     :  | calibration CPU: 1
2016-12-08 11:42:46,185 INFO    : Workload     :  | loops count: 1
2016-12-08 11:42:46,186 INFO    : Workload     : + phase_000001: duration 1.000000 [s] (10 loops)
2016-12-08 11:42:46,186 INFO    : Workload     : |  period   100000 [us], duty_cycle  50 %
2016-12-08 11:42:46,186 INFO    : Workload     : |  run_time  50000 [us], sleep_time  50000 [us]
2016-12-08 11:42:46,320 INFO    : Workload     : Workload execution START:
2016-12-08 11:42:46,322 INFO    : Workload     :    /data/local/tmp/bin/taskset 0x2 /data/local/tmp/bin/rt-app /data/local/tmp/devlib-target/rta_calib_00.json 2>&1
2016-12-08 11:42:48,012 INFO    : RTApp        : CPU2 calibration...
2016-12-08 11:42:48,084 INFO    : Workload     : Setup new workload rta_calib
2016-12-08 11:42:48,085 INFO    : Workload     : Workload duration defined by longest task
2016-12-08 11:42:48,086 INFO    : Workload     : Default policy: SCHED_OTHER
2016-12-08 11:42:48,086 INFO    : Workload     : ------------------------
2016-12-08 11:42:48,087 INFO    : Workload     : task [task1], sched: {'policy': 'FIFO', 'prio': 0}
2016-12-08 11:42:48,087 INFO    : Workload     :  | calibration CPU: 2
2016-12-08 11:42:48,087 INFO    : Workload     :  | loops count: 1
2016-12-08 11:42:48,088 INFO    : Workload     : + phase_000001: duration 1.000000 [s] (10 loops)
2016-12-08 11:42:48,088 INFO    : Workload     : |  period   100000 [us], duty_cycle  50 %
2016-12-08 11:42:48,088 INFO    : Workload     : |  run_time  50000 [us], sleep_time  50000 [us]
2016-12-08 11:42:48,220 INFO    : Workload     : Workload execution START:
2016-12-08 11:42:48,221 INFO    : Workload     :    /data/local/tmp/bin/taskset 0x4 /data/local/tmp/bin/rt-app /data/local/tmp/devlib-target/rta_calib_00.json 2>&1
2016-12-08 11:42:49,900 INFO    : RTApp        : CPU3 calibration...
2016-12-08 11:42:49,968 INFO    : Workload     : Setup new workload rta_calib
2016-12-08 11:42:49,969 INFO    : Workload     : Workload duration defined by longest task
2016-12-08 11:42:49,969 INFO    : Workload     : Default policy: SCHED_OTHER
2016-12-08 11:42:49,969 INFO    : Workload     : ------------------------
2016-12-08 11:42:49,970 INFO    : Workload     : task [task1], sched: {'policy': 'FIFO', 'prio': 0}
2016-12-08 11:42:49,970 INFO    : Workload     :  | calibration CPU: 3
2016-12-08 11:42:49,970 INFO    : Workload     :  | loops count: 1
2016-12-08 11:42:49,971 INFO    : Workload     : + phase_000001: duration 1.000000 [s] (10 loops)
2016-12-08 11:42:49,971 INFO    : Workload     : |  period   100000 [us], duty_cycle  50 %
2016-12-08 11:42:49,971 INFO    : Workload     : |  run_time  50000 [us], sleep_time  50000 [us]
2016-12-08 11:42:50,103 INFO    : Workload     : Workload execution START:
2016-12-08 11:42:50,104 INFO    : Workload     :    /data/local/tmp/bin/taskset 0x8 /data/local/tmp/bin/rt-app /data/local/tmp/devlib-target/rta_calib_00.json 2>&1
2016-12-08 11:42:51,757 INFO    : RTApp        : Target RT-App calibration:
2016-12-08 11:42:51,759 INFO    : RTApp        : {"0": 106, "1": 104, "2": 78, "3": 78}
2016-12-08 11:42:51,882 INFO    : RTApp        : big cores are ~36% more capable than LITTLE cores
2016-12-08 11:42:51,884 INFO    : TestEnv      : Using RT-App calibration values:
2016-12-08 11:42:51,886 INFO    : TestEnv      :    {"0": 106, "1": 104, "2": 78, "3": 78}
2016-12-08 11:42:51,888 INFO    : TestEnv      : Set results folder to:
2016-12-08 11:42:51,889 INFO    : TestEnv      :    /home/vagrant/lisa/results/20161208_114251
2016-12-08 11:42:51,891 INFO    : TestEnv      : Experiment results available also in:
2016-12-08 11:42:51,893 INFO    : TestEnv      :    /home/vagrant/lisa/results_latest
2016-12-08 11:42:51,895 INFO    : root         : Connected to arm64 target
DONE

List available Controllers

Details on the available controllers (or subsystems) can be found at: https://www.kernel.org/doc/Documentation/cgroup-v1/.


In [4]:
logging.info('%14s - Available controllers:', 'CGroup')
ssys = target.cgroups.list_subsystems()
# Each entry is a (name, hierarchy id, num cgroups, enabled) tuple
for (n, h, g, e) in ssys:
    logging.info('%14s -    %10s (hierarchy id: %d) has %d cgroups',
                 'CGroup', n, h, g)


2016-12-08 11:42:55,652 INFO    : root         :         CGroup - Available controllers:
2016-12-08 11:42:55,715 INFO    : root         :         CGroup -        cpuset (hierarchy id: 4) has 7 cgroups
2016-12-08 11:42:55,717 INFO    : root         :         CGroup -           cpu (hierarchy id: 3) has 2 cgroups
2016-12-08 11:42:55,718 INFO    : root         :         CGroup -       cpuacct (hierarchy id: 1) has 87 cgroups
2016-12-08 11:42:55,718 INFO    : root         :         CGroup -     schedtune (hierarchy id: 2) has 4 cgroups
2016-12-08 11:42:55,719 INFO    : root         :         CGroup -       freezer (hierarchy id: 5) has 1 cgroups

Example of CPUSET controller usage

Cpusets provide a mechanism for assigning a set of CPUs and memory nodes to a set of tasks. Cpusets constrain the CPU and memory placement of tasks to only the resources available within a task's current cpuset. They form a nested hierarchy visible in a virtual file system, and provide the essential hooks, beyond what is already present in the kernel, required to manage dynamic job placement on large systems.

More information can be found in the kernel documentation: https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt.
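
The cells below drive cpusets through devlib's wrappers; at the filesystem level a cpuset is just a directory, and a freshly created one is unusable until both its cpuset.cpus and cpuset.mems files are initialized. A minimal sketch of the raw interface, assuming a conventional /sys/fs/cgroup/cpuset mount point (on this target devlib mounts its own hierarchies, see the devlib_cgh4 path in the logs above):

import os

CPUSET_ROOT = '/sys/fs/cgroup/cpuset'  # hypothetical mount point

# Creating a cgroup is creating a directory in the hierarchy
os.mkdir(os.path.join(CPUSET_ROOT, 'LITTLE'))

# Both cpus and mems must be set before any task can be attached
with open(os.path.join(CPUSET_ROOT, 'LITTLE', 'cpuset.cpus'), 'w') as f:
    f.write('0-1')
with open(os.path.join(CPUSET_ROOT, 'LITTLE', 'cpuset.mems'), 'w') as f:
    f.write('0')

# Writing a PID into "tasks" migrates that process into the cpuset
with open(os.path.join(CPUSET_ROOT, 'LITTLE', 'tasks'), 'w') as f:
    f.write(str(os.getpid()))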


In [5]:
# Get a reference to the CPUSet controller
cpuset = target.cgroups.controller('cpuset')

In [6]:
# Get the list of currently configured CGroups for that controller
cgroups = cpuset.list_all()
logging.info('Existing CGroups:')
for cg in cgroups:
    logging.info('  %s', cg)


2016-12-08 11:42:58,914 INFO    : root         : Existing CGroups:
2016-12-08 11:42:58,915 INFO    : root         :   /
2016-12-08 11:42:58,916 INFO    : root         :   /system-background
2016-12-08 11:42:58,917 INFO    : root         :   /background
2016-12-08 11:42:58,918 INFO    : root         :   /foreground
2016-12-08 11:42:58,918 INFO    : root         :   /foreground/boost
2016-12-08 11:42:58,920 INFO    : root         :   /top-app
2016-12-08 11:42:58,921 INFO    : root         :   /camera-daemon

In [7]:
# Dump the configuration of each CGroup
for cgname in cgroups:
    cgroup = cpuset.cgroup(cgname)
    attrs = cgroup.get()
    cpus = attrs['cpus']
    logging.info('%s:%-15s cpus: %s', cpuset.kind, cgroup.name, cpus)


2016-12-08 11:43:01,858 INFO    : root         : cpuset:/               cpus: 0-3
2016-12-08 11:43:02,054 INFO    : root         : cpuset:/system-background cpus: 0-2
2016-12-08 11:43:02,255 INFO    : root         : cpuset:/background     cpus: 0
2016-12-08 11:43:02,450 INFO    : root         : cpuset:/foreground     cpus: 0-2
2016-12-08 11:43:02,649 INFO    : root         : cpuset:/foreground/boost cpus: 0-2
2016-12-08 11:43:02,855 INFO    : root         : cpuset:/top-app        cpus: 0-3
2016-12-08 11:43:03,053 INFO    : root         : cpuset:/camera-daemon  cpus: 0-3

In [8]:
# Create a LITTLE partition
cpuset_littles = cpuset.cgroup('/LITTLE')

In [9]:
# Check the attributes available for this control group
print "LITTLE:\n", json.dumps(cpuset_littles.get(), indent=4)


LITTLE:
{
    "memory_pressure": "0", 
    "memory_spread_page": "0", 
    "notify_on_release": "0", 
    "sched_load_balance": "1", 
    "cpus": "", 
    "effective_mems": "", 
    "memory_spread_slab": "0", 
    "mem_hardwall": "0", 
    "cpu_exclusive": "0", 
    "mem_exclusive": "0", 
    "ls": " /data/local/tmp/devlib-target/cgroups/devlib_cgh4/LITTLE/cpuset.*", 
    "mems": "", 
    "memory_migrate": "0", 
    "sched_relax_domain_level": "-1", 
    "effective_cpus": ""
}

In [10]:
# Tune the CPUs and MEMs attributes:
#   both must be initialized for the group to be usable
cpuset_littles.set(cpus=target.bl.littles, mems=0)
print "LITTLE:\n", json.dumps(cpuset_littles.get(), indent=4)


LITTLE:
{
    "memory_pressure": "0", 
    "memory_spread_page": "0", 
    "notify_on_release": "0", 
    "sched_load_balance": "1", 
    "cpus": "0-1", 
    "effective_mems": "0", 
    "memory_spread_slab": "0", 
    "mem_hardwall": "0", 
    "cpu_exclusive": "0", 
    "mem_exclusive": "0", 
    "ls": " /data/local/tmp/devlib-target/cgroups/devlib_cgh4/LITTLE/cpuset.*", 
    "mems": "0", 
    "memory_migrate": "0", 
    "sched_relax_domain_level": "-1", 
    "effective_cpus": "0-1"
}

In [11]:
# Define a periodic big (80%) task
task = Periodic(
    period_ms=100,
    duty_cycle_pct=80,
    duration_s=5).get()

# Create one task for each CPU in the target
tasks = {}
for tid, _ in enumerate(target.core_names):
    tasks['task{}'.format(tid)] = task

# Configure RTA to run all these tasks
rtapp = RTA(target, 'simple', calibration=te.calibration())
rtapp.conf(kind='profile', params=tasks, run_dir=target.working_directory);


2016-12-08 11:43:10,335 INFO    : Workload     : Setup new workload simple
2016-12-08 11:43:10,337 INFO    : Workload     : Workload duration defined by longest task
2016-12-08 11:43:10,338 INFO    : Workload     : Default policy: SCHED_OTHER
2016-12-08 11:43:10,340 INFO    : Workload     : ------------------------
2016-12-08 11:43:10,341 INFO    : Workload     : task [task0], sched: using default policy
2016-12-08 11:43:10,342 INFO    : Workload     :  | calibration CPU: 2
2016-12-08 11:43:10,343 INFO    : Workload     :  | loops count: 1
2016-12-08 11:43:10,343 INFO    : Workload     : + phase_000001: duration 5.000000 [s] (50 loops)
2016-12-08 11:43:10,344 INFO    : Workload     : |  period   100000 [us], duty_cycle  80 %
2016-12-08 11:43:10,344 INFO    : Workload     : |  run_time  80000 [us], sleep_time  20000 [us]
2016-12-08 11:43:10,344 INFO    : Workload     : ------------------------
2016-12-08 11:43:10,345 INFO    : Workload     : task [task1], sched: using default policy
2016-12-08 11:43:10,345 INFO    : Workload     :  | calibration CPU: 2
2016-12-08 11:43:10,345 INFO    : Workload     :  | loops count: 1
2016-12-08 11:43:10,346 INFO    : Workload     : + phase_000001: duration 5.000000 [s] (50 loops)
2016-12-08 11:43:10,346 INFO    : Workload     : |  period   100000 [us], duty_cycle  80 %
2016-12-08 11:43:10,346 INFO    : Workload     : |  run_time  80000 [us], sleep_time  20000 [us]
2016-12-08 11:43:10,347 INFO    : Workload     : ------------------------
2016-12-08 11:43:10,347 INFO    : Workload     : task [task2], sched: using default policy
2016-12-08 11:43:10,348 INFO    : Workload     :  | calibration CPU: 2
2016-12-08 11:43:10,348 INFO    : Workload     :  | loops count: 1
2016-12-08 11:43:10,348 INFO    : Workload     : + phase_000001: duration 5.000000 [s] (50 loops)
2016-12-08 11:43:10,349 INFO    : Workload     : |  period   100000 [us], duty_cycle  80 %
2016-12-08 11:43:10,349 INFO    : Workload     : |  run_time  80000 [us], sleep_time  20000 [us]
2016-12-08 11:43:10,349 INFO    : Workload     : ------------------------
2016-12-08 11:43:10,350 INFO    : Workload     : task [task3], sched: using default policy
2016-12-08 11:43:10,350 INFO    : Workload     :  | calibration CPU: 2
2016-12-08 11:43:10,350 INFO    : Workload     :  | loops count: 1
2016-12-08 11:43:10,351 INFO    : Workload     : + phase_000001: duration 5.000000 [s] (50 loops)
2016-12-08 11:43:10,351 INFO    : Workload     : |  period   100000 [us], duty_cycle  80 %
2016-12-08 11:43:10,351 INFO    : Workload     : |  run_time  80000 [us], sleep_time  20000 [us]

In [12]:
# Run all these tasks inside the LITTLE cpuset
trace = rtapp.run(ftrace=te.ftrace, cgroup=cpuset_littles.name, out_dir=te.res_dir)


2016-12-08 11:43:13,598 INFO    : Workload     : Workload execution START:
2016-12-08 11:43:13,601 INFO    : Workload     :    /data/local/tmp/bin/shutils cgroups_run_into /LITTLE /data/local/tmp/bin/rt-app /data/local/tmp/devlib-target/simple_00.json 2>&1
2016-12-08 11:43:25,156 INFO    : Workload     : Pulling trace file into [/home/vagrant/lisa/results/20161208_114251/simple_00.dat]...

In [13]:
# Check tasks residency on the LITTLE cluster
trappy.plotter.plot_trace(trace)



In [14]:
# Compute and visualize tasks residencies on the LITTLE cluster CPUs
s = SchedMultiAssert(trappy.FTrace(trace), te.topology, execnames=tasks.keys())
residencies = s.getResidency('cluster', target.bl.littles, percent=True)
print json.dumps(residencies, indent=4)


{
    "4659": {
        "residency": 100.0, 
        "task_name": "rt-app"
    }, 
    "4660": {
        "residency": 100.0, 
        "task_name": "rt-app"
    }, 
    "4661": {
        "residency": 100.0, 
        "task_name": "rt-app"
    }, 
    "4662": {
        "residency": 100.0, 
        "task_name": "rt-app"
    }
}

In [15]:
# Assert that ALL tasks always executed only on the LITTLE cluster
s.assertResidency('cluster', target.bl.littles,
                  99.9, operator.ge, percent=True, rank=len(residencies))


Out[15]:
True

Example of CPU controller usage

While the CPUSET controller assigns a set of CPUs and memory nodes to a group of tasks, the CPU controller partitions CPU bandwidth among groups, either as a relative weight (shares) or as a hard limit (CFS quota and period).
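
These two knobs behave quite differently: cpu.shares is a relative weight (default 1024) that only matters while groups contend for CPU time, whereas cfs_quota_us/cfs_period_us enforces a hard cap even when the rest of the system is idle. A worked sketch with illustrative numbers:

# cpu.shares: a relative weight, meaningful only under contention.
# Two busy groups with shares 512 and 1024 split CPU time 1:2.
shares = {'LITTLE': 512, 'default': 1024}
total = sum(shares.values())
for name in shares:
    print '{:8s}: ~{:.0f}% of contended CPU time'.format(
        name, 100.0 * shares[name] / total)

# cfs_quota_us / cfs_period_us: a hard cap. 50 ms of runtime per
# 100 ms period means the group can never consume more than half
# a CPU's worth of bandwidth:
quota_us, period_us = 50000, 100000
print 'hard cap: {:.1f} CPUs'.format(float(quota_us) / period_us)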


In [16]:
# Get a reference to the CPU controller
cpu = target.cgroups.controller('cpu')

In [17]:
# Create a LITTLE group under the CPU controller
cpu_littles = cpu.cgroup('/LITTLE')

In [18]:
# Check the attributes available for this control group
print "LITTLE:\n", json.dumps(cpu_littles.get(), indent=4)


LITTLE:
{
    "rt_period_us": "1000000", 
    "shares": "1024", 
    "rt_runtime_us": "0"
}

In [19]:
# Halve the CPU bandwidth weight of this CGroup (the default value
# of "shares" is 1024); a hard cap could be set instead via CFS
# bandwidth control, e.g.:
# cpu_littles.set(cfs_period_us=100000, cfs_quota_us=50000)
cpu_littles.set(shares=512)
print "LITTLE:\n", json.dumps(cpu_littles.get(), indent=4)


LITTLE:
{
    "rt_period_us": "1000000", 
    "shares": "512", 
    "rt_runtime_us": "0"
}

In [20]:
# Run all these tasks inside the /LITTLE group of the CPU controller
trace = rtapp.run(ftrace=te.ftrace, cgroup=cpu_littles.name)


2016-12-08 11:44:14,920 INFO    : Workload     : Workload execution START:
2016-12-08 11:44:14,921 INFO    : Workload     :    /data/local/tmp/bin/shutils cgroups_run_into /LITTLE /data/local/tmp/bin/rt-app /data/local/tmp/devlib-target/simple_00.json 2>&1
2016-12-08 11:44:26,513 INFO    : Workload     : Pulling trace file into [.//simple_00.dat]...

In [21]:
# Check tasks placement across CPUs
trappy.plotter.plot_trace(trace)


Example of CPU isolation


In [22]:
# Isolate CPU0

# This works by moving all user-space tasks into a "sandbox" cpuset
# which excludes the CPUs to be isolated (see the sketch below).
sandbox, isolated = target.cgroups.isolate(cpus=[0])
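
Under the hood this is plain cpuset manipulation: a "sandbox" group spanning the non-isolated CPUs and an "isolated" group are created, and every task that can be moved is migrated into the sandbox (they appear as DEVLIB_SBOX and DEVLIB_ISOL in the dumps below). A hedged sketch of that logic against the raw filesystem; the actual devlib implementation differs in detail:

import os

def isolate_sketch(cpuset_root, all_cpus, isolated_cpus):
    sandbox_cpus = [c for c in all_cpus if c not in isolated_cpus]

    # Create two sibling cpusets covering complementary sets of CPUs
    for name, cpus in [('DEVLIB_SBOX', sandbox_cpus),
                       ('DEVLIB_ISOL', isolated_cpus)]:
        path = os.path.join(cpuset_root, name)
        if not os.path.isdir(path):
            os.mkdir(path)
        with open(os.path.join(path, 'cpuset.cpus'), 'w') as f:
            f.write(','.join(str(c) for c in cpus))
        with open(os.path.join(path, 'cpuset.mems'), 'w') as f:
            f.write('0')

    # Migrate every task out of the root cpuset; per-CPU kernel
    # threads cannot be moved and make the write fail
    with open(os.path.join(cpuset_root, 'tasks')) as f:
        pids = f.read().split()
    sbox_tasks = os.path.join(cpuset_root, 'DEVLIB_SBOX', 'tasks')
    for pid in pids:
        try:
            with open(sbox_tasks, 'w') as f:
                f.write(pid)
        except IOError:
            pass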

In [23]:
# Check the attributes available for the SANDBOX group
print "Sandbox:\n", json.dumps(sandbox.get(), indent=4)


Sandbox:
{
    "memory_pressure": "0", 
    "memory_spread_page": "0", 
    "notify_on_release": "0", 
    "sched_load_balance": "1", 
    "cpus": "1-3", 
    "effective_mems": "0", 
    "memory_spread_slab": "0", 
    "mem_hardwall": "0", 
    "cpu_exclusive": "0", 
    "mem_exclusive": "0", 
    "ls": " /data/local/tmp/devlib-target/cgroups/devlib_cgh4/DEVLIB_SBOX/cpuset.*", 
    "mems": "0", 
    "memory_migrate": "0", 
    "sched_relax_domain_level": "-1", 
    "effective_cpus": "1-3"
}

In [24]:
# Check the attributes available for the ISOLATED group
print "Isolated:\n", json.dumps(isolated.get(), indent=4)


Isolated:
{
    "memory_pressure": "0", 
    "memory_spread_page": "0", 
    "notify_on_release": "0", 
    "sched_load_balance": "1", 
    "cpus": "0", 
    "effective_mems": "0", 
    "memory_spread_slab": "0", 
    "mem_hardwall": "0", 
    "cpu_exclusive": "0", 
    "mem_exclusive": "0", 
    "ls": " /data/local/tmp/devlib-target/cgroups/devlib_cgh4/DEVLIB_ISOL/cpuset.*", 
    "mems": "0", 
    "memory_migrate": "0", 
    "sched_relax_domain_level": "-1", 
    "effective_cpus": "0"
}

In [25]:
# Run some workload, which is expected not to run on the ISOLATED CPUs:
trace = rtapp.run(ftrace=te.ftrace)


2016-12-08 11:44:50,597 INFO    : Workload     : Workload execution START:
2016-12-08 11:44:50,601 INFO    : Workload     :    /data/local/tmp/bin/rt-app /data/local/tmp/devlib-target/simple_00.json 2>&1
2016-12-08 11:44:57,468 INFO    : Workload     : Pulling trace file into [.//simple_00.dat]...

In [26]:
# Check that tasks were not running on the ISOLATED CPUs
trappy.plotter.plot_trace(trace)



In [27]:
# Compute and visualize tasks residencies on ISOLATED CPUs
s = SchedMultiAssert(trappy.FTrace(trace), te.topology, execnames=tasks.keys())
residencies = s.getResidency('cpu', [0], percent=True)
print json.dumps(residencies, indent=4)


{
    "4968": {
        "residency": 0.0, 
        "task_name": "rt-app"
    }, 
    "4969": {
        "residency": 0.0, 
        "task_name": "rt-app"
    }, 
    "4970": {
        "residency": 0.0, 
        "task_name": "rt-app"
    }, 
    "4971": {
        "residency": 0.0, 
        "task_name": "rt-app"
    }
}

In [28]:
# Assert that the ISOLATED CPUs were not running any workload tasks
s.assertResidency('cpu', [0], 0.0, operator.eq, percent=True, rank=len(residencies))


Out[28]:
True