Test environment API - TestEnv

The test environment is primarily defined by the target configuration (see conf below).

One can also pass the test configuration - definining which software setups are needed on the hardware target, and a location for the results of the experiments.

Parameters:

  • target configuration
  • test configuration - more information on this can be found in examples/utils/executor_example.ipynb
  • wipe - if to clean up all previous content from the results folder
  • force new - if to create a new TestEnv object even if there is one available

In [1]:
# One initial cell for imports
import json
import time
import os
import logging

In [2]:
from conf import LisaLogging
LisaLogging.setup()
# For debug information use:
# LisaLogging.setup(level=logging.DEBUG)


2016-12-12 11:51:14,376 INFO    : root         : Using LISA logging configuration:
2016-12-12 11:51:14,376 INFO    : root         :   /home/vagrant/lisa/logging.conf

Test environment setup

Target configuration:

  • platform - the currently supported boards are:

    • linux - accessed via SSH connection
    • android - accessed via ADB connection
    • host - ran on local host
  • board - the currently supported boards are:

    • juno - target is a JUNO board
    • tc2 - target is a TC2 board
    • oak - target is MT8173 platform model
    • pixel - target is a Pixel device
    • hikey - target is a Hikey development platform
    • nexus5x - target is a Nexus 5X device
  • host - target IP or MAC address

  • device - target Android device ID

  • port - port for Android connection - default port is 5555

  • ANDROID_HOME - path to android-sdk-linux

  • username

  • password

  • keyfile - you can either specify a password or a keyfile

  • rtapp-calib - these values are not supposed to be specified at the first run on a target. After the first run, it's best to fill this array with values reported in the log messges for your specific target, for these not to be obtained again.

  • tftp - tftp server from where the target gets kernel/dtb images at each boot

  • modules - devlib modules to be enabled

  • exclude_modules - devlib modules to be disabled

  • tools - binary tools (available under ./tools/$ARCH/) to install by default

  • ping_time - wait time before trying to access the target after reboot

  • reboot_time - maximum time to wait after rebooting the target

  • features - list of test environment features to enable

    • no-kernel - do not deploy kernel/dtb images
    • no-reboot - do not force reboot the target at each configuration change
    • debug - enable debugging messages
  • ftrace - ftrace configuration

    • events
    • functions
    • buffsize
  • results_dir - location of results of the experiments


In [3]:
# Setup a target configuration
conf = {

    # Platform
    "platform"    : "linux",
    # Board
    "board"       : "juno",

    # Login credentials
    "host"        : "192.168.0.1",
    "username"    : "root",
    # You can specify either a password or keyfile
    "password"    : "juno",
    # "keyfile"   : "/complete/path/of/your/keyfile",

    # Tools to deploy
    "tools" : [ "rt-app", "taskset" ],

    "tftp"  : {
        "folder"    : "/var/lib/tftpboot/",
        "kernel"    : "Image",
        "dtb"       : "juno.dtb"
    },

    #"ping_time" : "15",
    #"reboot_time" : "180",

    # RTApp calibration values (comment to let LISA do a calibration run)
    "rtapp-calib" :  {
        "0": 358, "1": 138, "2": 138, "3": 357, "4": 359, "5": 355
    },

    # FTrace configuration
    "ftrace" : {
         "events" : [
             "cpu_idle",
             "sched_switch",
         ],
         "buffsize" : 10240,
    },
    
    # Where results are collected
    "results_dir" : "TestEnvExample",

    #"__features__" : "no-kernel no-reboot"
}

Test environment initialisation


In [5]:
from env import TestEnv

# Initialize a test environment using the provided configuration
te = TestEnv(conf)

Attributes

The initialisation of the test environment pre-initialises some useful environment variables.

This is some of the information available via the TestEnv object:


In [6]:
# The complete configuration of the target we have configured
print json.dumps(te.conf, indent=4)


{
    "username": "root", 
    "ftrace": {
        "buffsize": 10240, 
        "events": [
            "cpu_idle", 
            "sched_switch"
        ]
    }, 
    "rtapp-calib": {
        "1": 138, 
        "0": 358, 
        "3": 357, 
        "2": 138, 
        "5": 355, 
        "4": 359
    }, 
    "host": "192.168.0.1", 
    "password": "juno", 
    "tools": [
        "rt-app", 
        "taskset", 
        "trace-cmd", 
        "taskset", 
        "trace-cmd", 
        "perf", 
        "cgroup_run_into.sh"
    ], 
    "results_dir": "TestEnvExample", 
    "platform": "linux", 
    "board": "juno", 
    "__features__": [], 
    "tftp": {
        "kernel": "Image", 
        "folder": "/var/lib/tftpboot/", 
        "dtb": "juno.dtb"
    }
}

In [7]:
# Last configured kernel and DTB image
print te.kernel
print te.dtb


None
None

In [8]:
# The IP and MAC address of the target
print te.ip
print te.mac


192.168.0.1
None

In [9]:
# A full platform descriptor
print json.dumps(te.platform, indent=4)


{
    "nrg_model": {
        "big": {
            "cluster": {
                "nrg_max": 64
            }, 
            "cpu": {
                "cap_max": 1024, 
                "nrg_max": 616
            }
        }, 
        "little": {
            "cluster": {
                "nrg_max": 57
            }, 
            "cpu": {
                "cap_max": 447, 
                "nrg_max": 93
            }
        }
    }, 
    "clusters": {
        "big": [
            1, 
            2
        ], 
        "little": [
            0, 
            3, 
            4, 
            5
        ]
    }, 
    "cpus_count": 6, 
    "freqs": {
        "big": [
            450000, 
            625000, 
            800000, 
            950000, 
            1100000
        ], 
        "little": [
            450000, 
            575000, 
            700000, 
            775000, 
            850000
        ]
    }, 
    "topology": [
        [
            0, 
            3, 
            4, 
            5
        ], 
        [
            1, 
            2
        ]
    ]
}

In [10]:
# This is a pre-created folder to host the tests results generated using this
# test environment. Notice that the suite could add additional information
# in this folder, like for example a copy of the target configuration
# and other target specific collected information.
te.res_dir


Out[10]:
'/home/vagrant/lisa/results/TestEnvExample'

In [11]:
# The working directory on the target
te.workdir


Out[11]:
'/data/local/schedtest'

In [12]:
# The target topology, which can be used to build BART assertions
te.topology


Out[12]:
cluster [[0, 3, 4, 5], [1, 2]]
cpu [[0], [1], [2], [3], [4], [5]]

Functions

Some methods are also exposed to test developers which could be used to ease the creation of tests.

These are some of the methods available:


In [13]:
# Calibrate RT-App (if required) and get the most updated calibration value
te.calibration()


Out[13]:
{0: 358, 1: 138, 2: 138, 3: 357, 4: 359, 5: 355}

In [14]:
# Generate a JSON file with the complete platform description
te.platform_dump(dest_dir='/tmp')


Out[14]:
({'clusters': {'big': [1, 2], 'little': [0, 3, 4, 5]},
  'cpus_count': 6,
  'freqs': {'big': [450000, 625000, 800000, 950000, 1100000],
   'little': [450000, 575000, 700000, 775000, 850000]},
  'nrg_model': {u'big': {u'cluster': {u'nrg_max': 64},
    u'cpu': {u'cap_max': 1024, u'nrg_max': 616}},
   u'little': {u'cluster': {u'nrg_max': 57},
    u'cpu': {u'cap_max': 447, u'nrg_max': 93}}},
  'topology': [[0, 3, 4, 5], [1, 2]]},
 '/tmp/platform.json')

In [15]:
# Force a reboot of the target (and wait specified [s] before reconnect)
# Keep in mind that a reboot can be disabled from __features__ in the target configuration
te.reboot(reboot_time=360, ping_time=15)


2016-12-12 11:56:15,281 INFO    : TestEnv      : Target (00:02:f7:00:5d:d7) at IP address: 192.168.0.1
2016-12-12 11:56:16,087 INFO    : TestEnv      : Waiting up to 360[s] for target [192.168.0.1] to reboot...
2016-12-12 11:57:21,143 INFO    : TestEnv      : Devlib modules to load: ['bl', 'hwmon', 'cpufreq']
2016-12-12 11:57:21,144 INFO    : TestEnv      : Connecting linux target:
2016-12-12 11:57:21,145 INFO    : TestEnv      :   username : root
2016-12-12 11:57:21,146 INFO    : TestEnv      :       host : 192.168.0.1
2016-12-12 11:57:21,146 INFO    : TestEnv      :   password : juno
2016-12-12 11:57:21,147 INFO    : TestEnv      : Connection settings:
2016-12-12 11:57:21,147 INFO    : TestEnv      :    {'username': 'root', 'host': '192.168.0.1', 'password': 'juno'}
2016-12-12 11:57:37,176 INFO    : TestEnv      : Initializing target workdir:
2016-12-12 11:57:37,177 INFO    : TestEnv      :    /root/devlib-target
2016-12-12 11:57:43,908 INFO    : TestEnv      : Topology:
2016-12-12 11:57:43,908 INFO    : TestEnv      :    [[0, 3, 4, 5], [1, 2]]
2016-12-12 11:57:45,155 INFO    : TestEnv      : Loading default EM:
2016-12-12 11:57:45,156 INFO    : TestEnv      :    /home/vagrant/lisa/libs/utils/platforms/juno.json
2016-12-12 11:57:48,681 INFO    : TestEnv      : Enabled tracepoints:
2016-12-12 11:57:48,684 INFO    : TestEnv      :    cpu_idle
2016-12-12 11:57:48,685 INFO    : TestEnv      :    sched_switch
2016-12-12 11:57:48,688 INFO    : EnergyMeter  : Scanning for HWMON channels, may take some time...
2016-12-12 11:57:48,691 INFO    : EnergyMeter  : Channels selected for energy sampling:
2016-12-12 11:57:48,691 INFO    : EnergyMeter  :    BOARDBIG_energy
2016-12-12 11:57:48,692 INFO    : EnergyMeter  :    BOARDLITTLE_energy

In [18]:
# Resolve a MAC address into an IP address
te.resolv_host(host='00:02:F7:00:5A:5B')


06:03:00  INFO    :   HostResolver - Target (00:02:F7:00:5A:5B) at IP address: 192.168.0.1
Out[18]:
('00:02:F7:00:5A:5B', '192.168.0.1')

In [16]:
# Copy the specified file into the TFTP server folder defined by configuration
te.tftp_deploy('/etc/group')


06:03:00  INFO    :           TFTP - Deploy /etc/group into /var/lib/tftpboot/group

Attributes: target

Access to the devlib API

A special attribute of TestEnv is target, which represents a devlib instance. Using the target attribute we have access to the full set of devlib provided functionalities. A small set of these are exemplified below. For a more extensive set check the examples/devlib notebooks.


In [16]:
# Run a command on the target
te.target.execute("echo -n 'Hello Test Environment'", as_root=False)


Out[16]:
'Hello Test Environment'

In [17]:
# Spawn a command in background on the target
te.target.kick_off("sleep 10", as_root=True)


Out[17]:
''

In [18]:
# Acces to many target specific information
print "ABI                 : ", te.target.abi
print "big Core Family     : ", te.target.big_core
print "LITTLE Core Family  : ", te.target.little_core
print "CPU's Clusters IDs  : ", te.target.core_clusters
print "CPUs type           : ", te.target.core_names


ABI                 :  arm64
big Core Family     :  A57
LITTLE Core Family  :  A53
CPU's Clusters IDs  :  [0, 1, 1, 0, 0, 0]
CPUs type           :  ['A53', 'A57', 'A57', 'A53', 'A53', 'A53']

In [19]:
# Access to big.LITTLE specific information
print "big CPUs IDs        : ", te.target.bl.bigs
print "LITTLE CPUs IDs     : ", te.target.bl.littles
print "big CPUs freqs      : {}".format(te.target.bl.get_bigs_frequency())
print "big CPUs governor   : {}".format(te.target.bl.get_bigs_governor())


big CPUs IDs        :  [1, 2]
LITTLE CPUs IDs     :  [0, 3, 4, 5]
big CPUs freqs      : 450000
big CPUs governor   : interactive

Attributes: emeter (energy meter)

In order to sample energy from the target:


In [20]:
# Reset and sample energy counters
te.emeter.reset()
nrg = te.emeter.sample()
nrg = json.dumps(te.emeter.sample(), indent=4)
print "First read: ", nrg
time.sleep(2)
nrg = te.emeter.sample()
nrg = json.dumps(te.emeter.sample(), indent=4)
print "Second read: ", nrg


First read:  {
    "BOARDBIG": {
        "total": 0.03712299999999935, 
        "last": 5.61159, 
        "delta": 0.019406000000000034
    }, 
    "BOARDLITTLE": {
        "total": 0.017602000000000118, 
        "last": 4.954883, 
        "delta": 0.008766999999999747
    }
}
Second read:  {
    "BOARDBIG": {
        "total": 0.11209199999999964, 
        "last": 5.686559, 
        "delta": 0.018203999999999887
    }, 
    "BOARDLITTLE": {
        "total": 0.06513600000000075, 
        "last": 5.002417, 
        "delta": 0.009789000000000492
    }
}

Attribute: ftrace

You can configure FTrace for a specific experiment using the following:


In [21]:
# Configure a specific set of events to trace
te.ftrace_conf(
    {                                                                                                                                             
         "events" : [                                                                                                                                            
             "cpu_idle",                                                                                                                                         
             "cpu_capacity",
             "cpu_frequency",
             "sched_switch",
         ],                                                                                                                                                      
         "buffsize" : 10240                                                                                                                                      
    }
)


2016-12-12 11:58:44,190 INFO    : TestEnv      : Enabled tracepoints:
2016-12-12 11:58:44,192 INFO    : TestEnv      :    cpu_idle
2016-12-12 11:58:44,193 INFO    : TestEnv      :    cpu_capacity
2016-12-12 11:58:44,194 INFO    : TestEnv      :    cpu_frequency
2016-12-12 11:58:44,194 INFO    : TestEnv      :    sched_switch

In [22]:
# Start/Stop a FTrace session
te.ftrace.start()
te.target.execute("uname -a")
te.ftrace.stop()

In [23]:
# Collect and visualize the trace
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)

# There might be a different display value on your machine
# Check by issuing "echo $DISPLAY" in the LISA VM
output = os.popen("DISPLAY=:10.0 kernelshark {}".format(trace_file))