All LISA modules have been updated to use a more consistent logging infrastructure, which can be configured using a single configuration file:
In [1]:
!head -n12 $LISA_HOME/logging.conf
Each module has a unique name which can be used to assign a priority level for messages generated by that module.
In [2]:
!head -n30 $LISA_HOME/logging.conf | tail -n5
The default logging level for a notebook can also be easily configured using these few lines:
In [3]:
import logging
from conf import LisaLogging
LisaLogging.setup(level=logging.INFO)
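Since each module logs through the standard Python logging hierarchy, the level of an individual module can also be adjusted at runtime. A minimal sketch using only the standard library; the module names used here ('TestEnv', 'Trace') are assumptions for illustration, check logging.conf for the actual names:

```python
import logging

# Raise verbosity for one module while keeping another quiet.
# 'TestEnv' and 'Trace' are illustrative logger names.
logging.getLogger('TestEnv').setLevel(logging.DEBUG)
logging.getLogger('Trace').setLevel(logging.WARNING)
```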
Juno R0 and Juno R2 boards are now accessible by specifying "juno" in the target configuration. The previous distinction was required because the two boards reported HWMON channels differently. That difference no longer exists, so both Juno boards can now be connected using the same platform data.
In [6]:
from env import TestEnv
te = TestEnv({
    'platform' : 'linux',
    'board'    : 'juno',
    'host'     : '10.1.210.45',
    'username' : 'root'
})
target = te.target
Automated LISA tests previously configured the Executor using JSON files. This is still possible, but the existing tests now use Python dictionaries directly in the code. In the short term, this allows de-duplicating configuration elements that are shared between multiple tests. It will later allow more flexible test configuration.
See tests/eas/acceptance.py for an example of how this is currently used.
https://github.com/ARM-software/lisa/pull/209
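The de-duplication enabled by plain Python dictionaries can be sketched as follows. This is an illustration of the idea, not the actual structure of acceptance.py; the `make_conf` helper is hypothetical:

```python
# Common settings shared by several tests; each test only overrides
# what differs from the base. Note this is a shallow merge.
base_conf = {
    'flags' : 'ftrace',
    'cpufreq' : {'governor' : 'performance'},
}

def make_conf(tag, **overrides):
    """Build a test configuration from the shared base (hypothetical helper)."""
    conf = dict(base_conf, tag=tag)
    conf.update(overrides)
    return conf

tests_conf = {
    'confs' : [
        make_conf('base', sched_features='NO_ENERGY_AWARE'),
        make_conf('eas', sched_features='ENERGY_AWARE'),
    ],
}
```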
A new "files" attribute can be added to Executor configurations which allows to specify a list files (e.g. sysfs and procfs) and values to be written to that files.
For example, the following test configuration:
In [9]:
tests_conf = {
    "confs" : [
        {
            "tag" : "base",
            "flags" : "ftrace",
            "sched_features" : "NO_ENERGY_AWARE",
            "cpufreq" : {
                "governor" : "performance",
            },
            "files" : {
                '/proc/sys/kernel/sched_is_big_little' : '0',
                '!/proc/sys/kernel/sched_migration_cost_ns' : '500000'
            },
        }
    ]
}
can be used to run a test where, before running the workload, the platform is configured by writing the specified values into the listed files.
Notice that a value written to a file is verified only if the file path is prefixed by a '/'. Otherwise (e.g. when the path is prefixed by '!'), the write never fails, for example when the file does not exist.
https://github.com/ARM-software/lisa/pull/227
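The verification rule above can be sketched in plain Python. The helper name and its behaviour are illustrative of the described semantics, not Executor's actual implementation:

```python
def write_conf_files(files):
    """Write each value to its file, following the rule described above:
    a path prefixed by '/' is written and then read back to verify the
    value; a path prefixed by '!' is written best-effort, and failures
    (e.g. the file does not exist) are silently ignored."""
    for path, value in files.items():
        verify = path.startswith('/')
        real_path = path.lstrip('!')
        try:
            with open(real_path, 'w') as fh:
                fh.write(value)
        except OSError:
            if verify:
                raise
            # Best-effort write: ignore the failure and move on
            continue
        if verify:
            # Read the value back to confirm the write took effect
            with open(real_path) as fh:
                assert fh.read().strip() == value
```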
Executor learned the "freeze_userspace" conf flag. When this flag is present, LISA uses the devlib freezer to freeze as much of userspace as possible while the experiment workload is executing, in order to reduce system noise.
The Executor example notebook gives an example of using this feature:
https://github.com/ARM-software/lisa/blob/master/ipynb/examples/utils/executor_example.ipynb
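For instance, the flag can be requested alongside ftrace collection. A sketch following the tests_conf shape shown earlier in these notes; the surrounding keys are illustrative:

```python
# Sketch: requesting the userspace freezer in addition to ftrace.
tests_conf = {
    'confs' : [
        {
            'tag' : 'frozen',
            'flags' : ['ftrace', 'freeze_userspace'],
            'cpufreq' : {'governor' : 'performance'},
        },
    ],
}
```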
When the Trace module is initialized, by default all the tasks in that trace are identified and exposed via the usual getTasks() method:
In [15]:
from trace import Trace
import json
with open('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/platform.json', 'r') as fh:
    platform = json.load(fh)
trace = Trace('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/trace.dat',
              ['sched_switch'], platform)
In [17]:
logging.info("%d tasks loaded from trace", len(trace.getTasks()))
In [19]:
logging.info("The rt-app task in this trace has these PIDs:")
logging.info(" %s", trace.getTasks()['rt-app'])
A new platform definition file has been added which makes it easy to set up a connection with a Pixel device:
In [33]:
!cat $LISA_HOME/libs/utils/platforms/pixel.json
In [4]:
from env import TestEnv
te = TestEnv({
    'platform'     : 'android',
    'board'        : 'pixel',
    'ANDROID_HOME' : '/home/patbel01/Code/lisa/tools/android-sdk-linux/'
}, force_new=True)
target = te.target
A new Android benchmark has been added to run the tests provided by UiBench. Here is a notebook which provides an example of how to run this benchmark on your Android target:
Preliminary tests aim at verifying some basic support required for a complete functional EAS solution.
An initial version of these preliminary tests is now available:
https://github.com/ARM-software/lisa/blob/master/tests/eas/preliminary.py
and it will be extended in the future with additional tests.
A new test has been added to verify that capacity capping is working as expected:
https://github.com/ARM-software/lisa/blob/master/tests/eas/capacity_capping.py
The EAS acceptance test collects a set of platform-independent tests to verify basic EAS behaviours.
This test suite has been cleaned up and is now available with detailed documentation:
https://github.com/ARM-software/lisa/blob/master/tests/eas/acceptance.py
A new scratchpad folder has been added under the ipynb folder, which collects the available notebooks:
In [12]:
!tree -L 1 ~/Code/lisa/ipynb
This folder is configured to be ignored by git, making it the best place for your work-in-progress notebooks.
Example notebooks have been consolidated and better organized by topic:
In [13]:
!tree -L 1 ~/Code/lisa/ipynb/examples
This is the folder to look into to understand how a specific LISA API works.
Here we will provide a dedicated folder and set of notebooks for each of the main LISA modules.