A tests executor is a module which supports the execution of a configured set of experiments.
 Each experiment consists of one workload run in one target configuration.
The executor module can be configured to run a set of workloads (wloads) in each target configuration of a specified set (confs). These wloads and confs are specified by the "experiments_conf" input dictionary, which is described below under Experiments Configuration.
All the results generated by each experiment are collected in a results folder. The format and content of the results folder are detailed in the last cell of Tests Execution below.
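As a rough illustration (this is not LISA API code), the set of experiments the executor runs is the cross product of the configured confs and wloads, repeated for the requested number of iterations:

```python
from itertools import product

# Hypothetical sketch of the experiment matrix: every workload is run in
# every target configuration, "iterations" times. The tags below match the
# example configuration used later in this notebook.
confs = ["base", "eas"]     # tags from the "confs" list
wloads = ["perf", "rta"]    # keys of the "wloads" dictionary
iterations = 1

experiments = [(conf, wload, i)
               for conf, wload in product(confs, wloads)
               for i in range(iterations)]

# 2 confs x 2 wloads x 1 iteration = 4 experiments
print(len(experiments))
```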
In [1]:
    
import logging
from conf import LisaLogging
LisaLogging.setup()
    
    
In [2]:
    
import os
import json
from env import TestEnv
from executor import Executor
    
In [3]:
    
# Setup a target configuration
my_target_conf = {
    
    # Target platform and board
    "platform"    : 'linux',
    "board"       : 'juno',
    
    # Target board IP/MAC address
    "host"        : '192.168.0.1',
    
    # Login credentials
    "username"    : 'root',
    "password"    : 'juno',
}
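As a quick illustration (the helper below is hypothetical, not part of the LISA API), the dictionary above must supply the platform, board, and login details used to reach the target:

```python
# Hypothetical sanity check, not a LISA API: verify that a target
# configuration provides the keys shown in the example above.
REQUIRED_KEYS = {"platform", "board", "host", "username", "password"}

def check_target_conf(conf):
    """Raise ValueError if any expected target_conf key is missing."""
    missing = REQUIRED_KEYS - conf.keys()
    if missing:
        raise ValueError("missing target_conf keys: %s" % sorted(missing))
    return True
```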
    
The experiments configuration describes the software setups to exercise on the hardware target.
It can be passed as an argument either to an Executor instance or to a TestEnv one.
 
The elements of the experiments configuration are described by the comments in the cell below.
In [4]:
    
my_experiments_conf = {
    # Folder where all the results will be collected
    "results_dir" : "ExecutorExample",
    # Platform configurations to test: you can specify any number of configurations
    "confs" : [
        {  
            "tag"            : "base",                           # Relevant string to identify configuration
            "flags"          : ["ftrace", "freeze_userspace"],   # Enable FTrace events, freeze userspace while running            
            "sched_features" : "NO_ENERGY_AWARE",                # Disable EAS
            "cpufreq"        : {                                 # Use PERFORMANCE CpuFreq
                "governor" : "performance",
            },
        },
        {
            "tag"            : "eas",                            # Relevant string to identify configuration
            "flags"          : ["ftrace", "freeze_userspace"],   # Enable FTrace events, freeze userspace while running
            "sched_features" : "ENERGY_AWARE",                   # Enable EAS
            "cpufreq"        : {                                 # Use PERFORMANCE CpuFreq
                "governor" : "performance",
            },
        },
    ],
    
    # Workloads to run (on each platform configuration)
    "wloads" : {
        # Run hackbench with 1 group using pipes
        "perf" : {
            "type" : "perf_bench",
            "conf" : {
                "class" : "messaging",
                "params" : {
                    "group" :    1,
                    "loop"  :   10,
                    "pipe"  : True,
                    "thread": True,
                }
            }
        },
        # Run a 20% duty-cycle periodic task
        "rta" : {
            "type" : "rt-app",
            "loadref" : "big",
            "conf" : {
                "class"  : "profile",
                "params"  : {
                    "p20" : {
                        "kind"   : "Periodic",
                        "params" : {
                            "duty_cycle_pct" : 20,
                         },
                    },
                },
            },
        },
    },
    
    # Number of iterations for each workload
    "iterations" : 1,
}
    
my_test_conf = {
    # FTrace events to collect for all the tests configuration which have
    # the "ftrace" flag enabled
    "ftrace"  : {
         "events" : [
            "sched_switch",
            "sched_wakeup",
            "sched_wakeup_new",
            "cpu_frequency",
         ],
         "buffsize" : 80 * 1024,
    },
    
    # Tools required by the experiments
    "tools"   : [ 'trace-cmd', 'perf' ],
    
    # Modules required by these experiments
    "modules"     : [ 'bl', 'cpufreq', 'cgroups' ],
}
    
In [5]:
    
executor = Executor(TestEnv(target_conf=my_target_conf,
                            test_conf=my_test_conf),
                    my_experiments_conf)
    
    
In [6]:
    
executor.run()
    
    
In [7]:
    
!tree {executor.te.res_dir}
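As a hedged sketch of how the collected artifacts could be inspected programmatically (list_results is a hypothetical helper, not a LISA API; it plays the same role as the `tree` command above on `executor.te.res_dir`):

```python
import os

# Hypothetical helper: return the relative paths of all files collected
# under a results folder, e.g. the directory reported by executor.te.res_dir.
def list_results(res_dir):
    paths = []
    for root, _dirs, files in os.walk(res_dir):
        for name in files:
            paths.append(os.path.relpath(os.path.join(root, name), res_dir))
    return sorted(paths)
```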