Archive for March, 2013

OpenCL on ARM (part 1)

Posted: March 29, 2013 in linaro, OpenCL

A few weeks before Linaro Connect I started looking into what OpenCL implementations might be available on ARM. A little bit of googling suggested that the only choice was going to be boards with a Mali T6xx GPU. This was ok since that basically boils down to the Arndale board and the Samsung Chromebook, both good options since I happened to have a Chromebook.

I downloaded the Mali OpenCL SDK, which can be found on their site.

It didn’t take long following the instructions before I realized this SDK isn’t like most SDKs: nowhere does it contain any form of Mali OpenCL driver. The SDK has a lib directory which, when you type make (and you’ll probably have to fix their Makefile to get it to work), yields a libOpenCL.so, except that it’s essentially an empty stub library. You can compile and link against what is provided, but when you try to run, nothing will happen. Inside this library is just a long list of functions with no implementation behind them. None. Not very useful.

Via this discussion, at the very bottom we see a bit of an explanation as to why.

We (ARM) do provide a build of Linux containing the OpenCL driver to select partners under a specific license, but this is not public at this time

So they leave it to the maker of the board to distribute a driver at their option. This gives the board maker the choice not to support OpenCL at all if they so choose. Ok, I respect that and it makes sense: just because a Mali T6xx part is on a board doesn’t mean it’s universally wired up the same way, which may require some driver-specific changes. That’s conjecture on my part since obviously we’ve no view into the source code, as it’s not Open Source.

That said, the Insignal discussion can be found on their forums here. Simply put, it’s not yet available for Linux but supposedly is available for their Android Jelly Bean build.

Hrumph!

I like Android, but the problem is that at Connect I gave up my Arndale board to one of my coworkers. I haven’t ordered a new one since the wait times are impressively long, and currently they are sold out again at HowChip.

Android, to my knowledge, doesn’t run on the ARM-based Samsung Chromebook, so I’m out of options.

Next I did a little spelunking within the ChromeOS file system on my Chromebook to see if I might find something to suggest that OpenCL was there. I didn’t find libOpenCL.so in any of the usual places, so it’s probably safe to say ChromeOS doesn’t make use of OpenCL. No chance of copying over any binaries for use on Linux.
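For what it’s worth, the “spelunking” amounted to little more than something along these lines from a dev-mode shell (the path patterns are just my guesses at the usual spots):

find / -name 'libOpenCL*' 2>/dev/null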

So, backing up, what other options do I have? Well, I do have an OS X option, and putting together an OpenCL HelloWorld there is quite easy. Still.

I’ve a couple of Intel Linux boxes; at least they would be a place to get my feet wet in the meantime, and more in line with what OpenCL on ARM Linux will be like. So on Ubuntu I proceeded. There are two options: either Intel’s VCSource or AMD’s APP SDK, both proclaiming OpenCL support.

Let’s talk about how the OpenCL infrastructure is installed.

  • First, the includes, which are best placed at /usr/include/CL. These aren’t needed at runtime, of course.
  • Next, if you put the contents of each respective SDK’s lib directory into /usr/lib/OpenCL/vendors/intel or /usr/lib/OpenCL/vendors/amd, you can have both SDKs installed at the same time. These are needed at runtime.
  • Next you have /etc/OpenCL/vendors, which will have a number of .icd files. You only need one, but with multiple SDKs you’ll have more than one. ICD stands for Installable Client Driver; i.e., this points to the real driver. It is required at runtime. libOpenCL.so looks at the .icd files to determine which driver(s) to use, making libOpenCL.so more of a traffic cop between your application and the real driver.
  • Next, within /etc/ld.so.conf.d you’ll have a new file that points to where the shared libraries are. In my case these point to /usr/lib/OpenCL/vendors/intel and /usr/lib/OpenCL/vendors/amd in separate files.
  • Last, I have symlinks for libOpenCL.so, libOpenCL.so.1 and libOpenCL.so.1.2 that all point to the libOpenCL.so implementation I’m using, such as the one in /usr/lib/OpenCL/vendors/amd.
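To make that concrete, here’s roughly what those pieces look like on the AMD side of my setup. The exact file names are examples (AMD’s 64-bit ICD is typically amdocl64.icd; the ld.so.conf.d file name is whatever you choose):

$ cat /etc/OpenCL/vendors/amdocl64.icd
libamdocl64.so

$ cat /etc/ld.so.conf.d/opencl-amd.conf
/usr/lib/OpenCL/vendors/amd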

AMD’s APP SDK wants the environment variable AMDAPPSDKROOT=”/opt/AMDAPP” set, and does so in /etc/profile.

So knowing these aspects of setup, I proceeded to try out a simple HelloWorld app that would get the list of devices, create a context and spawn off some simple work to validate things are sane.
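To give a flavor of what I mean, here’s a minimal sketch in C of the device-listing and context-creation part. This is an illustrative example of my own, not the exact app I used; error handling is abbreviated. Build with something like gcc hello.c -lOpenCL.

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_uint num;
    cl_int err;
    char name[128];

    /* Ask the ICD loader (the traffic cop) for the first platform. */
    err = clGetPlatformIDs(1, &platform, &num);
    if (err != CL_SUCCESS || num == 0) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }
    clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(name), name, NULL);
    printf("Platform: %s\n", name);

    /* Grab any device, CPU or GPU, on that platform. */
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, &num);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "No devices found\n");
        return 1;
    }
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Device: %s\n", name);

    /* If this succeeds, the ICD plumbing described above is working. */
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "clCreateContext failed (%d)\n", err);
        return 1;
    }
    printf("Context created OK\n");
    clReleaseContext(ctx);
    return 0;
}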

Let’s talk about how well things work with the Intel and AMD SDKs.

Intel’s SDK for Linux indicates they only support a limited set of CPUs, and GPUs are not supported at all. Neither is the i7 CPU, which is what my laptop has. Tried to run. Fail! If you want to use their SDK with an i7, for instance, you can only do so on Windows! Lame! Further, why Intel would have a dependency on such a limited set of CPUs is beyond me.

Ok, so obviously this wasn’t going to work. Next I switched over to the AMD APP SDK. As it turns out, they support OpenCL for just CPUs, i.e. without using a GPU, or for submitting work to both CPUs and GPUs. Neither my laptop nor my main Intel desktop has an ATI GPU, so CPU-only support was essential for me to use the AMD implementation, since they only support ATI GPUs and, as it turns out, “any” Intel-based CPU. Using the AMD-supplied HelloWorld OpenCL app, it ran. But.

./HelloWorld
Setting of real/effective user Id to 0/0 failed
FATAL: Module fglrx not found.
Error! Fail to load fglrx kernel module! Maybe you can switch to root user to load kernel module directly
No GPU device available.
Choose CPU as default device.
input string:
GdkknVnqkc
output string:
HelloWorld

fglrx of course is the ATI kernel module. Via OpenCL you can specify that your workload is only going to be directed at CPUs. Even though you do so, you’ll still get this error every time. Awesome! At least, as compared to the Intel offering, it runs on any CPU.
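For reference, directing work at CPUs only is just a matter of the device type you request when enumerating devices. A minimal sketch (and under AMD’s runtime the fglrx probe above still fires regardless):

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id cpu;
    cl_uint num;

    if (clGetPlatformIDs(1, &platform, &num) != CL_SUCCESS)
        return 1;

    /* CL_DEVICE_TYPE_CPU restricts the query to CPU devices only. */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &cpu, &num) != CL_SUCCESS) {
        fprintf(stderr, "No CPU device available\n");
        return 1;
    }
    printf("Found a CPU device; no GPU required\n");
    return 0;
}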


Adventures with Lava

Posted: March 27, 2013 in Uncategorized

I’ve been enabling piglit/waffle, a large collection of OpenGL, OpenGL ES and OpenCL tests, to run inside of Linaro’s LAVA test farm. The task isn’t complete yet but it’s coming along nicely. Here are some things I’ve learned along the way that I believe are useful to pass along.

First, some basics. In order to run in the LAVA farm, you need a token, which you obtain from validation.linaro.org.
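If I recall the tooling correctly, you then register that token locally with lava-tool, using a URL of the same shape as the submit-job command at the end of this post (substitute your own id; lava-tool prompts you to paste the token):

lava-tool auth-add https://<your-id>@validation.linaro.org/lava-server/RPC2/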

Token in hand, you then need the following components.

  • An OS image to test that is accessible via the net. (Via HTTP seems best.)
  • A job description, which lives in a json file. This defines what image to load and what test to run.
  • A test description, which is in a yaml formatted file. You’ll want to place this test description within a git repo which is accessible via the net.
  • Something to test which has some form of output.

Let’s work through each element and consider some approaches which will make your life easier.

Testing is a job

At the top level is the json format job file. It loads an OS image and runs an identified test. In this example, it identifies that it wants to run on panda hardware and loads the referenced Android components from the listed HTTP addresses.

{
  "timeout": 1800,
  "job_name": "piglit-android",
  "device_type": "panda",
  "actions": [
    {
      "command": "deploy_linaro_android_image",
      "parameters": {
        "data": "http://people.linaro.org/~tomgall/android-panda/userdata.tar.bz2",
        "boot": "http://people.linaro.org/~tomgall/android-panda/boot.tar.bz2",
        "system": "http://people.linaro.org/~tomgall/android-panda/system.tar.bz2"
      }
    },
    {
      "command": "android_install_binaries"
    },
    {
      "command": "lava_test_shell",
      "parameters": {
        "testdef_repos": [
          {
            "git-repo": "git://git.linaro.org/people/tomgall/test-definitions.git",
            "testdef": "android/shader_runner_gles2.yaml"
          }
        ],
        "timeout": 600
      }
    },
    {
      "command": "submit_results_on_host",
      "parameters": {
        "stream": "/anonymous/tom-gall/",
        "server": "http://validation.linaro.org/lava-server/RPC2/"
      }
    }
  ]
}

Don’t create a yaml test file for each individual test found in a whole mass collection of tests; be lazy and run as many tests in one shot as makes sense. You’ll be happier. Otherwise you’ll have a boatload of yaml test files, which is no fun to create (even scripted) or maintain.

Remember, for each yaml file there will be a step to install the OS image, install the test, run the test and then restore the machine back to what it was. So keeping the number of yaml files down to a reasonable level is a wise use of resources.

Within this json file, there are several things to pay attention to. Within the lava_test_shell command, I instruct lava to pull from my git repo and, from within that git repo, to run the android/shader_runner_gles2.yaml file. This gets us to the next layer in the onion.

Notice the timeout values. There is no magic here. Since it’s unfortunately a little hard to tell if a test run has gone off into the weeds, Lava uses a timer (in seconds) which, if exceeded, causes Lava to bonk the system under test over the head, reboot, and install back to a last known good state, making it ready for the next testing to come along.

Also look at the submit_results_on_host part. You’ll have your own, obviously, which is tied to your id. This gives you, via a series of URLs, the ability to get at the results.


android/shader_runner_gles2.yaml

metadata:
  name: shader_runner_gles2
  version: 1.0
  format: "Lava-Test-Shell Test Definition 1.0"

install:
  git-repos:
    - git://git.linaro.org/people/tomgall/test-definitions.git

run:
  steps:
    - "export PIGLIT_PLATFORM=android"
    - "START_DIR=$(pwd)"
    - "cd test-definitions/android/scripts"
    - "./homescreen.sh"
    - "cd $START_DIR"
    - "cd test-definitions/android/scripts"
    - "./piglit-run-shader.sh /system/xbin/piglit/piglit-shader-test/shader_runner_gles2 /data/shader-data"

parse:
  pattern: "(?P<test_case_id>.*-*):\\s+(?P<result>(pass|fail))"
  fixupdict:
    PASS: pass
    FAIL: fail
    SKIP: skip

Within this yaml file we get closer to the testcase we are going to run, but we’re not quite there yet. What we have is the git repository again, but in this case it’s going to be installed on the system under test. The run: section is a sequence of commands that will run on our Android system. The homescreen.sh script waits for the homescreen to come up before anything continues on. Since piglit is all graphical in nature, we need the boot to have occurred all the way up to the homescreen or we will have lots of failures.
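One thing worth noting: the parse: pattern expects one “name: result” line per test, which is exactly the shape the runner script below emits. A made-up example of a line that matches:

/data/shader-data/example.shader_test: pass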

Next we run the piglit-run-shader.sh script, which bridges between the Lava world and the piglit world. To this script we pass the test binary to run and a directory where all the shader data is contained. Why?

When in the land of X, do as X does: impedance match between the two universes

Lava has expectations about the resulting output. It’s better to use your yaml to call a script, then from that script call your testcase or testcases, interpret the results in the script, and echo them out in the format that Lava expects. Let’s look at the piglit-run-shader.sh script.

#!/system/bin/sh
# Find all of the *.shader_test data files recursively under the
# named directory ($2) and run each through the test binary ($1).
find "${2}" -name '*.shader_test' -print0 | while read -r -d '' file
do
  RESULT=$("${1}" "${file}" -auto)

  # piglit reports its result as a fragment of json; match on it.
  PSTRING="PIGLIT: {'result': 'pass'"
  SSTRING="PIGLIT: {'result': 'skip'"
  FSTRING="PIGLIT: {'result': 'fail'"

  # Echo one line per test in the simple format Lava's parse
  # pattern expects; anything unrecognized counts as a failure.
  case $RESULT in
    *"$PSTRING"*) echo "${file}: pass";;
    *"$SSTRING"*) echo "${file}: skip";;
    *"$FSTRING"*) echo "${file}: fail";;
    *) echo "${file}: fail";;
  esac
done

First, notice this is a shell script. The first action it performs is a find on the data directory for all the shader_test data files. Then, looping over what is found, it runs the test program binary that was passed in, scans the result, matches it against the json that piglit outputs, and echoes out a result in the format that Lava wants. Everybody stays happy.

Running

To run all of this in the LAVA farm (which is really the easiest way to test it) I use the following command:

lava-tool submit-job https://tom-gall@validation.linaro.org/lava-server/RPC2/ run-glslparser.json

Linaro Connect Asia 2013 was held March 4th – 8th in Hong Kong. Four of us from the Graphics Working Group were in attendance. I’ve written up our notes and outcomes from the week. They can be found in the Linaro Wiki:

https://wiki.linaro.org/WorkingGroups/Middleware/Graphics/Notes/LCA-2013

We very much look forward to the next Connect, which will be in July of this year in Dublin, Ireland. Until then, we’ve got some work to do!