Adventures with Lava

Posted: March 27, 2013 in Uncategorized

I’ve been enabling piglit/waffle, a large collection of OpenGL, OpenGL ES and OpenCL tests, to run inside Linaro’s LAVA test farm. The task isn’t complete yet, but it’s coming along nicely. Here are some things I’ve learned along the way that I believe are worth passing along.

First, some basics. In order to run in the LAVA farm, you need a token, which you obtain from the LAVA server.

Token in hand, you then need the following components.

  • An OS image to test that is accessible via the net (HTTP seems best).
  • A job description, which lives in a json file. This defines what image to load and what test to run.
  • A test description in a yaml formatted file. You’ll want to place this test description within a git repo which is accessible via the net.
  • Something to test which has some form of output.

Let’s work through each element and consider some approaches which will make your life easier.

Testing is a job

At the top level is the json format job file. It loads an OS image and runs an identified test. This example declares that it wants to run on panda hardware and loads the referenced Android components from their http addresses.

 {
   "timeout": 1800,
   "job_name": "piglit-android",
   "device_type": "panda",
   "actions": [
     {
       "command": "deploy_linaro_android_image",
       "parameters": {
         "data": "",
         "boot": "",
         "system": ""
       }
     },
     {
       "command": "android_install_binaries"
     },
     {
       "command": "lava_test_shell",
       "parameters": {
         "testdef_repos": [
           {
             "git-repo": "git://",
             "testdef": "android/shader_runner_gles2.yaml"
           }
         ],
         "timeout": 600
       }
     },
     {
       "command": "submit_results_on_host",
       "parameters": {
         "stream": "/anonymous/tom-gall/",
         "server": ""
       }
     }
   ]
 }
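A habit of mine (not a Lava requirement): before submitting a job file like this, run it through Python's bundled json parser locally, so a missing brace or stray comma fails on your desk rather than in the farm. A minimal sketch, feeding a trimmed-down job fragment in on stdin:

```shell
# Sanity-check job json locally; json.tool exits non-zero on a parse
# error, so a bad file never makes it to the farm.
python3 -m json.tool <<'EOF' > /dev/null && echo "json ok"
{
  "timeout": 1800,
  "job_name": "piglit-android",
  "device_type": "panda"
}
EOF
```

In practice you'd point it at your real job file (`python3 -m json.tool myjob.json`, where myjob.json is whatever you named yours).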

Don’t create a yaml test file for each individual test in a large collection of tests; be lazy and run as many tests in one shot as makes sense. You’ll be happier, because otherwise you’ll have a boatload of yaml test files. That’s no fun to create (even to script) or to maintain.

Remember that for each yaml file there will be a step to install the OS image, install the test, run the test and then restore the machine back to what it was. So keeping the number of yaml files down to something reasonable is a wise use of resources.

Within this json file, there are several things to pay attention to. Within the lava_test_shell action, I instruct Lava to pull from my git repo and, from within that repo, to run the android/shader_runner_gles2.yaml file. This gets us to the next layer of the onion.

Notice the timeout values. There is no magic here. Since it’s a little hard to tell whether a test run has gone off into the weeds, Lava uses a timer (in seconds) which, if exceeded, causes Lava to bonk the system under test over the head, reboot, and reinstall back to the last known good state, making it ready for the next testing to come along.

Also look at the submit_results_on_host part. You’ll obviously have your own stream, tied to your id. This gives you the ability, via a series of URLs, to get at the results.



 metadata:
   name: shader_runner_gles2
   version: 1.0
   format: "Lava-Test-Shell Test Definition 1.0"

 install:
   git-repos:
     - git://

 run:
   steps:
     - "export PIGLIT_PLATFORM=android"
     - "START_DIR=$(pwd)"
     - "cd test-definitions/android/scripts"
     - "./"
     - "cd $START_DIR"
     - "cd test-definitions/android/scripts"
     - "./ /system/xbin/piglit/piglit-shader-test/shader_runner_gles2 /data/shader-data"

 parse:
   pattern: "(?P<test_case_id>.*-*):\\s+(?P<result>(pass|fail))"
   fixupdict:
     PASS: pass
     FAIL: fail
     SKIP: skip

Within this yaml file we get closer to the testcase we’re going to run, but we’re not quite there yet. We have the git repository again, but in this case it’s going to be installed on the system under test. The run: section is a sequence of commands that will run on our Android system. The first script waits for the homescreen to come up before anything continues. Since piglit is all graphical in nature, the boot needs to have progressed all the way to the homescreen or we will see lots of failures.

Next we run the script which bridges between the Lava world and the piglit world. To this script we pass the test binary to run and a directory containing all the shader data. Why?

When in the land of X, do as X does: impedance match between the two universes

Lava has expectations about the resulting output. It’s better to use your yaml to call a script, have that script call your testcase or testcases, interpret the results, and then echo them out in the format that Lava expects. Let’s look at the script:

#!/bin/bash
# find and loop over the shader tests found
# recursively in the named directory
find ${2} -name "*.shader_test" -print0 | while read -d $'' file
do
    RESULT=$( ${1} ${file} -auto )
    PSTRING="PIGLIT: {'result': 'pass'"
    SSTRING="PIGLIT: {'result': 'skip'"
    FSTRING="PIGLIT: {'result': 'fail'"
    case $RESULT in
        *"$PSTRING"*) echo "${file}: pass";;
        *"$SSTRING"*) echo "${file}: skip";;
        *"$FSTRING"*) echo "${file}: fail";;
        *) echo "${file}: fail";;
    esac
done

First, notice this is a shell script. The first action it performs is a find over the data directory for all the shader_test data files. Then, looping over what is found, it runs the test program binary that was passed in, scans the result, matches it against the json that piglit outputs, and echos out a result in the format that Lava wants. Everybody stays happy.
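As a quick illustration (my own sketch, not part of the original setup), you can check that lines in this file-colon-result format really would be picked up by a pattern like the one in the yaml’s parse section. Here grep -E stands in for Lava’s Python regex; note that, like the yaml pattern, it only matches pass and fail:

```shell
# Sample lines in the bridge script's output format, plus one line of
# noise; only the pass/fail lines survive the filter, which is what
# Lava's parser would turn into test results.
printf '%s\n' \
  "shaders/glsl-color.shader_test: pass" \
  "shaders/glsl-blend.shader_test: fail" \
  "some unrelated log line" \
| grep -E ': +(pass|fail)$'
```

Running this prints the two shader_test lines and drops the noise, so the round trip from script output to Lava result works.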


To run all of this in the lava farm (which is really the easiest way to test it) I use the following command:

lava-tool submit-job run-glslparser.json
