Within the GPGPU team, Gil Pitney has been working on Shamrock, an open source OpenCL implementation. It’s really a friendly fork of the Clover project, but taken in a bit of a new direction.

Over the past few months Gil has updated it to make use of the new MCJIT in llvm, which works much better for ARM processors. Further, he’s updated Shamrock so that it uses current llvm; I have a build based on 3.5.0 on my Chromebook.

The other part of Gil’s Shamrock work is that in time it will also be able to drive Keystone hardware, TI’s ARM + DSP on-board computing solution. Being able to drive DSPs with OpenCL is quite an awesome capability. I do wish I had one of those boards.

The other capability Shamrock has is to provide a CPU driver for OpenCL on ARM. How does it perform? Good question!

I took my OpenCL accelerated sqlite prototype and built it to use the Shamrock CPU-only driver. Would you expect a CPU-only OpenCL driver offloading SQL SELECT queries to be faster, or the sqlite engine?

If you guessed OpenCL running on a CPU-only driver, you’re right. Now remember the Samsung ARM-based Chromebook is a dual-core A15. The queries run against a 100,000 row, single-table database with 7 columns. Times are in microseconds; lower numbers are better.

sql1 took 43653 microseconds
OpenCL handcoded-opencl/sql1.cl Interval took 17738 microseconds
OpenCL Shamrock 2.46x faster
sql2 took 62530 microseconds
OpenCL handcoded-opencl/sql2.cl Interval took 18168 microseconds
OpenCL Shamrock 3.44x faster
sql3 took 110095 microseconds
OpenCL handcoded-opencl/sql3.cl Interval took 18711 microseconds
OpenCL Shamrock 5.88x faster
sql4 took 143278 microseconds
OpenCL handcoded-opencl/sql4.cl Interval took 19612 microseconds
OpenCL Shamrock 7.30x faster
sql5 took 140398 microseconds
OpenCL handcoded-opencl/sql5.cl Interval took 18698 microseconds
OpenCL Shamrock 7.5x faster
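The speedup figures above are just the ratio of the two wall-clock times; a minimal check of the arithmetic (the helper name is mine):

```c
/* Speedup is simply the sqlite time divided by the OpenCL time. */
static double speedup(double sqlite_us, double opencl_us) {
    return sqlite_us / opencl_us;
}
```

For example, speedup(43653, 17738) gives the 2.46x reported for sql1, and speedup(140398, 18698) gives the 7.5x for sql5.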

These numbers for running on the CPU are pretty consistent, and I was concerned there was some error in the process. Yet the returned number of matching rows is the same for both the sqlite engine and the OpenCL versions, which helps rule out functional problems. I’ve clipped the result row counts from the output above for brevity.

I wasn’t frankly expecting this kind of speed up, especially with a CPU only driver. Yet there it is in black and white. It does speak highly of the capabilities of OpenCL to be more efficient at computing when you have data parallel problems.

Another interesting thing to note in this comparison: the best results achieved so far have been with the Mali GPU using vload/vstore, thus taking advantage of SIMD vector instructions. On a CPU this would equate to the use of NEON. The Shamrock CPU-only driver doesn’t at the moment support vload/vstore, so the compiled OpenCL kernel isn’t even using NEON on the CPU to achieve these results.

I run OS X on my laptop. (Gasp!) I ssh into my various Linux boxes to work on various projects. As I’m doing a little work with RenderScript and my sqlite acceleration project, I thought it would be handy to build Android on my OS X laptop. It turns out it’s not entirely difficult, and it required just one fix to the code.

Ports

There are several projects that bring various Linux/Unix tools to OS X. I use MacPorts; Homebrew is probably another good option. Either way this gives us the foundation of tools that the Android build system is going to need.

The install instructions offer an extra easy pkg option.

Next we need to install some software.

sudo port install coreutils findutils pngcrush gsed gnupg

Xcode

Xcode is of course Apple’s development environment for OSX and iOS. You need it, and it can be installed directly out of the App Store.

Java

Make sure you have java installed.

java -version
java version "1.6.0_65"

If you don’t, you’ll get a popup dialog that will ask if you want to install it. Do!

Python

Make sure you have python installed. If I recall correctly it’s part of the default install with OS X Mavericks. There is also the option to install via ports:

sudo port install python

Repo

Pull down repo.

curl http://commondatastorage.googleapis.com/git-repo-downloads/repo > ~/bin/repo

Make sure repo is executable and that ~/bin is on your PATH. (Note a ~ inside double quotes isn’t expanded, so use $HOME.)

chmod a+x ~/bin/repo
export PATH="$PATH:$HOME/bin"

Android SDK tools

You need to download the Android SDK tools built for the Mac and extract them. At this point I created an android directory and put the tools inside of it.

mkdir -p ~/android
mv <wherever>/android-sdk  ~/android

Filesystem setup

OS X, for all its joys, doesn’t deal with case differences in its file system unless you specifically created the file system to do so. The default doesn’t. It’s not 8.3, but it’s still 1990s lame. So you’ll need to create a case-sensitive file system for the Android source code to live in.

Make sure you have the space in your file system. I created a 100 GB image; I wouldn’t go below 50. I also put it on my Desktop, which makes it easy to double-click later to mount. Feel free to mount it wherever works best for you. However, remember this location!

hdiutil create -type SPARSE -fs "Case-sensitive Journaled HFS+" -size 100g -volname "android" -attach ~/Desktop/Android
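If you ever want to verify that a mount point really is case-sensitive, a quick programmatic check settles it. A sketch in C (the helper name and scratch file names are mine):

```c
#include <stdio.h>

/* Returns 1 if the filesystem holding `dir` is case-sensitive:
 * create a lowercase file, then see whether the uppercase name
 * resolves to it. Returns -1 if the directory isn't writable. */
static int fs_is_case_sensitive(const char *dir) {
    char lower[512], upper[512];
    snprintf(lower, sizeof lower, "%s/casecheck_tmp", dir);
    snprintf(upper, sizeof upper, "%s/CASECHECK_TMP", dir);
    FILE *f = fopen(lower, "w");
    if (!f)
        return -1;
    fclose(f);
    FILE *g = fopen(upper, "r");   /* opens on case-insensitive HFS+ */
    int sensitive = (g == NULL);
    if (g)
        fclose(g);
    remove(lower);
    return sensitive;
}
```

On the default OS X file system this returns 0; on the HFS+ image created above it returns 1.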

Android source code

Download as you normally would. (Note the cd to the location where you just attached the new case-sensitive HFS+ file system.)

cd ~/Desktop/Android
git clone http://android.googlesource.com/platform/manifest.git
git branch -r   # this will show you all the branch options. I was after the latest.
repo init -u git://android.git.kernel.org/platform/manifest.git  -b android-4.4_r1.2
repo sync

Environment Setup

We need to set up a few environment variables. First, add the Android SDK tools to your path:

export PATH=~/android/android-sdk/sdk/platform-tools:$PATH
export BUILD_MAC_SDK_EXPERIMENTAL=1
export LC_CTYPE=C
export LANG=C

The One Fix

So in jni_generator.py there is a slight issue where it doesn’t handle the situation where one of the tool parameters isn’t available, so we need to work around it defensively. (Yeah, yeah, I should just post the patch.)

In external/chromium_org/base/android/jni_generator/jni_generator.py

At the top of the file (around line 20) add

import platform

Then lower down, in _RemoveComments (around line 531), add the following check for Darwin so that -fpreprocessed isn’t passed:

  def _RemoveComments(self, contents):
    # We need to support both inline and block comments, and we need to handle
    # strings that contain '//' or '/*'. Rather than trying to do all that with
    # regexps, we just pipe the contents through the C preprocessor. We tell cpp
    # the file has already been preprocessed, so it just removes comments and
    # doesn't try to parse #include, #pragma etc.
    #
    # TODO(husky): This is a bit hacky. It would be cleaner to use a real Java
    # parser. Maybe we could ditch JNIFromJavaSource and just always use
    # JNIFromJavaP; or maybe we could rewrite this script in Java and use APT.
    # http://code.google.com/p/chromium/issues/detail?id=138941
    system = platform.system()
    if system == 'Darwin':
      cpp_args = ['cpp']
    else:
      cpp_args = ['cpp', '-fpreprocessed']
    p = subprocess.Popen(args=cpp_args,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    stdout, _ = p.communicate(contents)

Ready To Build

That’s it. At least I hope I captured everything I had to do. Build away.

I’ve posted my initial OpenCL accelerated sqlite prototype code:

http://git.linaro.org/people/tom.gall/sq-cl.git

Don’t get excited. Remember, it’s a prototype, and quite a contrived one at that. It doesn’t handle the general case yet, and of course it has bugs. But! It’s interesting, and I think it shows what’s possible.

Over at the Mali developer community that ARM hosts, I happened to mention this work in a post, which ended up resulting in some good suggestions on the use of vectors as well as other good feedback. While working with vectors was a bit painful due to the introduction of some bugs on my part, I made my way through it and have some initial numbers with a couple of kernels, so I can get an idea just what a difference it makes.

A lot.

The core of the algorithm for sql1 changes from:

    do {
        if ((data[offset].v > 60) && (data[offset].w < 0)) {
            resultArray[roffset].id = data[offset].id;
            resultArray[roffset].v = data[offset].v;
            resultArray[roffset].w = data[offset].w;
            roffset++;
        }
        offset++;
        endRow--;
    } while (endRow);

To

    do {
        v1 = vload4(0, data1+offset);
        v2 = vload4(0, data2+offset);
        r = (v1 > 60) && ( 0 > v2);
        vstore4(r,0, resultMask+offset);
        offset+=4;
        totalRows--;
    } while (totalRows);

With each spin through the loop, the vectorized version is of course operating over 4 values at once to check for a match. Obvious win. To do this the data has to come in as pure columns, and I’m using a vector essentially as a bitmask to indicate whether each row is a match or not. This requires a post-processing loop to spin through and assemble the resulting data into a useful state. For the 100,000 row database I’m using, it doesn’t seem to have as much of a performance impact as I thought it might.
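That post-processing loop is just a compaction pass: walk the mask the kernel wrote and gather the matching rows. A sketch in plain C (the names are my own, assuming the kernel stored an OpenCL-style 0/-1 flag per row):

```c
#include <stddef.h>

/* Gather the ids of rows the kernel flagged as matches.
 * mask[i] is nonzero (-1 for OpenCL vector compares) when row i matched.
 * Returns the number of matching rows written into out. */
static size_t compact_results(const int *mask, const int *ids,
                              size_t nrows, int *out) {
    size_t roffset = 0;
    for (size_t i = 0; i < nrows; i++) {
        if (mask[i]) {
            out[roffset++] = ids[i];
        }
    }
    return roffset;
}
```

The same walk would also copy out the other selected columns; only the id column is shown to keep the sketch short.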

For the first sql1 test query the numbers look like this:

CPU sql1 took 43631 microseconds
OpenCL sql1  took 14545 microseconds  (2.99x or 199% better)
OpenCL (using vectors) 4114 microseconds (10.6x better or 960%)

Not bad. sql3 sees even better results:

CPU sql3 took 111020 microseconds
OpenCL sql3 took 44533 microseconds (2.49x  or 149% better)
OpenCL (using vectors) took 4436 microseconds (25.02x or 2402% better)

There’s another factor in why these vectorized versions are doing better: with the newer code I am using fewer registers on the Mali GPU and thus am able to up the number of work units from 64 to 128.

I do have one bug that I need to track down. I am (of course) validating that all the versions are coming up with the same matches. The new vector versions are off by a couple of rows. The missing rows don’t seem to follow a pattern. I’m sure I’ve done something dumb. Now that there is the ability for more eyes on the code perhaps someone will spot it.

I’ve done some more tuning over the past couple of days. I’ve also done some reading about how to make OpenCL perform better on ARM Mali. In this post I’m going to retrace some of my steps, share what my tests look like, share what some of my OpenCL looks like, share current performance numbers, and last, discuss next steps.

Gentle Reminder

This is in many ways still a prototype / proof of concept. My early goal is to get a very good sense of what the possible performance of an OpenCL accelerated SQLite would be for the general case. From this prototype I want to be able to iterate toward a complete implementation.

Performance Testing the Prototype

I’m comparing the C APIs that SQLite provides with my own API that I’ve developed. My API works with the same data structures and is compiled with the SQLite source code, so I’m able to have an apples-to-apples comparison. The end of the performance measurement is always when the caller of the API has all the data for the SQL statement they requested.

While OpenCL has some nice mechanisms to measure the beginning and end of operations, I’m instead using clock_gettime for the time measurements. Example:

if (clock_gettime(CLOCK_MONOTONIC, &startSelectTime)) {
    return -1;
}
/* ... the operation being timed ... */
if (clock_gettime(CLOCK_MONOTONIC, &endSelectTime)) {
    return -1;
}
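
The elapsed time then falls out of the two timespecs. A small helper for the microsecond math (the helper name is mine; the measurement approach is the one above):

```c
#include <time.h>

/* Difference between two CLOCK_MONOTONIC samples, in microseconds. */
static long elapsed_us(const struct timespec *start,
                       const struct timespec *end) {
    return (end->tv_sec - start->tv_sec) * 1000000L +
           (end->tv_nsec - start->tv_nsec) / 1000L;
}
```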

I am using 13 SQL statements that were defined in the CUDA-accelerated SQLite paper by Peter Bakkum and Kevin Skadron. I am further using their same 100,000 row x 7 column database.

The 13 SQL statements are:

char *sql1  = "SELECT id, uniformi, normali5 FROM test WHERE uniformi > 60 AND normali5 < 0";
char *sql2  = "SELECT id, uniformf, normalf5 FROM test WHERE uniformf > 60 AND normalf5 < 0";
char *sql3  = "SELECT id, uniformi, normali5 FROM test WHERE uniformi > -60 AND normali5 < 5";
char *sql4  = "SELECT id, uniformf, normalf5 FROM test WHERE uniformf > -60 AND normalf5 < 5";
char *sql5  = "SELECT id, normali5, normali20 FROM test WHERE (normali20 + 40) > (uniformi - 10)";
char *sql6  = "SELECT id, normalf5, normalf20 FROM test WHERE (normalf20 + 40) > (uniformf - 10)";
char *sql7  = "SELECT id, normali5, normali20 FROM test WHERE normali5 * normali20 BETWEEN -5 AND 5";
char *sql8  = "SELECT id, normalf5, normalf20 FROM test WHERE normalf5 * normalf20 BETWEEN -5 AND 5";
char *sql9  = "SELECT id, uniformi, normali5, normali20 FROM test WHERE NOT uniformi OR NOT normali5 OR NOT normali20";
char *sql10 = "SELECT id, uniformf, normalf5, normalf20 FROM test WHERE NOT uniformf OR NOT normalf5 OR NOT normalf20";
char *sql11 = "SELECT SUM(normalf20) FROM test";
char *sql12 = "SELECT AVG(uniformi) FROM test WHERE uniformi > 0";
char *sql13 = "SELECT MAX(normali5), MIN(normali5) FROM test";

The queries are a good starting point; however, I do feel the need to add to them, as they avoid the use of character data, for instance. For a start, though, I find them reasonable.

A bit about SQLite

There isn’t much magic when it comes to SQLite, or really most other databases when you think about it. You have rows of data, and within those rows there are some number of columns. Your query might be interested in the 1st, 3rd and 5th columns but nothing else. You might have some sort of test or sum or average you want performed as part of your query. As it stands today, SQLite parses the SQL statement and turns it into a bytecode program. The bytecode program is run in a virtual machine that knows how to look in the right spots for the columns and their types, perform operations on the data, and so on.

The SQLite virtual machine does not have a just in time compiler.

My OpenCL acceleration essentially picks up at the point after the bytecode program has been created but before it is executed. In the normal case you’d execute the bytecode program on the CPU and get your data. In the OpenCL accelerated case, the OpenCL kernel takes the place of the bytecode program, with the database data as input.
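For sql1, for instance, once the WHERE clause is expressed as straight code rather than bytecode, the per-row predicate is simply this (the function name is mine):

```c
/* The WHERE clause of sql1 as a direct predicate:
 * uniformi > 60 AND normali5 < 0 */
static int sql1_match(int uniformi, int normali5) {
    return uniformi > 60 && normali5 < 0;
}
```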

In the future, instead of bytecode, I think a utilization of llvm’s MCJIT to just-in-time compile for a CPU, or to dispatch to a GPU, makes the most sense. At a future date, SPIR (OpenCL’s IR) generated by the just-in-time compiler could then be fed into a SPIR-supporting driver for GPU offload. In the case of CPUs you’d of course use MCJIT to generate machine instructions.

Evolution of OpenCL SQLite queries

Consider the following SQL statement:

SELECT id, uniformi, normali5 FROM test WHERE uniformi > 60 AND normali5 < 0

Internal to SQLite this comes from a database with 7 columns. id, uniformi and normali5 are the first three columns in the row.

So in our OpenCL code we can use the following mapping:

typedef struct tag_my_struct {
    int id;
    int uniformi;
    int normali5;
} Row;

(Note I did try to use an int3, or an int2 and an int, in the data structure, but I could never get it to work. With queries involving say 4 ints, I have used int4 vectors and that works really well.)

This gives us the following code to run for each row:

do {
    if ((data[offset].uniformi > 60) && (data[offset].normali5 < 0)) {
        resultArray[roffset].id = data[offset].id;
        resultArray[roffset].uniformi = data[offset].uniformi;
        resultArray[roffset].normali5 = data[offset].normali5;
        roffset++;
    }
    offset++;
    endRow--;
} while (endRow);

Next it comes down to how many rows should a work unit handle?

We’re fortunate in that within our computational problem, we don’t have dependencies between rows. If we had 100,000 rows to consider and 100,000 GPU cores, every GPU core could work on one row and that would be that. Silly, but you could do that.

Now if you’re thinking to yourself that the output in the resultArray will be different for each run of the kernel, and that the roffset will be different for each work unit and has to be picked up and considered as part of the results, you’re entirely correct.

There is some amount of post processing work that needs to be performed.
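Concretely, if each work unit writes its matches into its own slice of the result buffer and reports its own roffset, the host has to stitch the partial results back together. A sketch of that merge in plain C (the names and buffer layout are my own assumptions):

```c
#include <stddef.h>
#include <string.h>

/* Each of nunits work units filled the first counts[u] entries of its
 * rows_per_unit-sized slice of partial[]. Pack the slices into
 * final_out[] and return the total match count. */
static size_t merge_work_unit_results(const int *partial,
                                      const size_t *counts,
                                      size_t nunits, size_t rows_per_unit,
                                      int *final_out) {
    size_t total = 0;
    for (size_t u = 0; u < nunits; u++) {
        memcpy(final_out + total, partial + u * rows_per_unit,
               counts[u] * sizeof(int));
        total += counts[u];
    }
    return total;
}
```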

The time it takes to do any sort of data copying/munging into the GPU’s memory, plus the time it takes to run the OpenCL kernel, plus the time to copy results out and do any post processing, needs to be compared to the time the original non-OpenCL-accelerated SQLite implementation would have taken.

Tuning for Mali

It’s important to understand the architecture of the device(s) you may be running on. In my case I’m using a Mali T604 in an ARM-based Samsung Chromebook. There are a couple of papers I recommend reading, especially the first one.

http://malideveloper.arm.com/downloads/GPU_Pro_5/GronqvistLokhmotov_white_paper.pdf 

http://malideveloper.arm.com/develop-for-mali/tutorials-developer-guides/developer-guides/mali-t600-series-gpu-opencl-developer-guide/

For me there were a couple of pieces of advice that brought significant gains.

Avoid array indexing arithmetic

Using the first sql1 query I first accessed the elements of data in the following way:

 if ((data[offset+1] > 60) && (data[offset+2] < 0)) {

This is slower as compared to the method of using an array of data structures such as:

typedef struct tag_my_struct {
    int id;
    int uniformi;
    int normali5;
} Row;

if ((data[offset].uniformi > 60) && (data[offset].normali5 < 0)) {

Vectors are even better, but as mentioned I wasn’t able to get a structure with an int3, or an int2 and an int, to work. A structure with float4 or int4 vectors, however, worked just fine.

typedef struct tag_my_struct {
    float4 v;
} Row;

How about the use of vload/vstore? For my data I haven’t as of yet seen a noticeable performance improvement using them.  This is on my list to revisit.

The memory architecture of Mali is important to understand and take advantage of. First, access to OpenCL global and local data has the same cost.

Buffers

How one creates memory buffers is important and has a performance impact.

clCreateBuffer(CL_MEM_ALLOC_HOST_PTR)  is the preferred method.

clCreateBuffer(CL_MEM_USE_HOST_PTR)  uses a buffer allocated by malloc. Unfortunately it means that the contents of that malloced region must be copied into storage accessible by Mali. CL_MEM_ALLOC_HOST_PTR avoids the copy.

On desktop GPUs memory on the graphics card is disjoint from memory that the CPU core(s) have direct access to. As such operations such as

 err = clEnqueueReadBuffer(s->queue, s->results_gpu->roffsetResults ....

make sense. However this is slower on ARM and Mali. Instead

 t = clEnqueueMapBuffer(s->queue, s->results_gpu->roffsetResults ....

is much faster. In my case I started with clEnqueueReadBuffer and, via my newfound knowledge from the guides, moved to clEnqueueMapBuffer.

Kernel Work Units

How many work units one should use is not always a straightforward value. Gronqvist & Lokhmotov note that the theoretical maximum for Mali is 256, but the practical number is based on how many registers are being used by the OpenCL kernel. With four registers or fewer, 256 is the right value; between four and eight, 128; and between eight and sixteen, 64 work units.

For my kernels I noticed that for the integer-based kernels 64 work units was the right choice, while for the kernels that mix integer and floating point 128 was the right setting. I don’t have an explanation for this.
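That rule of thumb is easy to capture in code. A small helper reflecting the Gronqvist & Lokhmotov guidance (the function, and the fallback beyond sixteen registers, are my own):

```c
/* Suggested Mali work-group size as a function of registers used by
 * the kernel, per the Gronqvist & Lokhmotov guidance above. */
static int mali_work_group_size(int registers_used) {
    if (registers_used <= 4)
        return 256;
    if (registers_used <= 8)
        return 128;
    if (registers_used <= 16)
        return 64;
    return 32; /* beyond the cases the paper discusses; a guess */
}
```

As noted above, treat the result as a starting point for experiment rather than a hard rule.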

Early Results

Here’s what I’m seeing for results across the 13 queries. CPU times are with a SQLite build compiled with -O2 and running on a Cortex-A15. GPU times are from a Mali T604.

SELECT id, uniformi, normali5 FROM test WHERE uniformi > 60 AND normali5 < 0

CPU sql1 took 43631 microseconds

OpenCL sql1  took 14545 microseconds  (2.99x better or 199% better)


SELECT id, uniformf, normalf5 FROM test WHERE uniformf > 60 AND normalf5 < 0

CPU sql2 took 62785 microseconds

OpenCL sql2 took 7756 microseconds (8.09x better or 709% better)


SELECT id, uniformi, normali5 FROM test WHERE uniformi > -60 AND normali5 < 5

CPU sql3 took 114448 microseconds

OpenCL sql3.cl took 44533 microseconds (2.56x better or 156% better)


SELECT id, uniformf, normalf5 FROM test WHERE uniformf > -60 AND normalf5 < 5

CPU sql4 took 139694 microseconds

OpenCL sql4.cl took 20911 microseconds (6.68x better or 568% better)


SELECT id, normali5, normali20 FROM test WHERE (normali20 + 40) > (uniformi - 10)

CPU sql5 took 138859 microseconds

OpenCL sql5.cl took 48834 microseconds (2.84x better or 184% better)


SELECT id, normalf5, normalf20 FROM test WHERE (normalf20 + 40) > (uniformf - 10)

CPU sql6 took 163830 microseconds

OpenCL sql6.cl took 22712 microseconds (7.21x better or 621% better)


SELECT id, normali5, normali20 FROM test WHERE normali5 * normali20 BETWEEN -5 AND 5

CPU sql7 took 82662 microseconds

OpenCL sql7.cl took 20669 microseconds (3.99x better or 299% better)


SELECT id, normalf5, normalf20 FROM test WHERE normalf5 * normalf20 BETWEEN -5 AND 5

CPU sql8 took 96882 microseconds

OpenCL sql8.cl took 10854 microseconds (8.92x better or 792% better)


SELECT id, uniformi, normali5, normali20 FROM test WHERE NOT uniformi OR NOT normali5 OR NOT normali20

CPU sql9 took 74317 microseconds

OpenCL sql9.cl took 12955 microseconds (5.73x better or 473% better)


SELECT id, uniformf, normalf5, normalf20 FROM test WHERE NOT uniformf OR NOT normalf5 OR NOT normalf20

CPU sql10 took 91617 microseconds

OpenCL sql10.cl took 7524 microseconds (12.17x better or 1117% better)


SELECT SUM(normalf20) FROM test

CPU sql11 took 44995 microseconds

OpenCL sql11 took 2190 microseconds (20.54x or 1954% better)


SELECT AVG(uniformi) FROM test WHERE uniformi > 0

CPU sql12 took 41000 microseconds

OpenCL sql12.cl took 4236 microseconds (9.67x or 867% better)


SELECT MAX(normali5), MIN(normali5) FROM test

CPU sql13 took 52354 microseconds
OpenCL sql13.cl took 4619 microseconds (11.33x or 1034% better)


This gives us a range of improvements between 2.56x and 20.54x better when OpenCL is used for these queries.
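Those 2.56x and 20.54x bounds are just the smallest and largest of the thirteen CPU/GPU ratios. A quick check over the times listed above (array and function names are mine):

```c
/* CPU and GPU times (microseconds) for sql1..sql13, from the runs above. */
static const double cpu_us[13] = {43631, 62785, 114448, 139694, 138859,
                                  163830, 82662, 96882, 74317, 91617,
                                  44995, 41000, 52354};
static const double gpu_us[13] = {14545, 7756, 44533, 20911, 48834,
                                  22712, 20669, 10854, 12955, 7524,
                                  2190, 4236, 4619};

/* Smallest and largest CPU/GPU ratios across the 13 queries. */
static void speedup_range(double *min_out, double *max_out) {
    double lo = cpu_us[0] / gpu_us[0], hi = lo;
    for (int i = 1; i < 13; i++) {
        double s = cpu_us[i] / gpu_us[i];
        if (s < lo) lo = s;
        if (s > hi) hi = s;
    }
    *min_out = lo;
    *max_out = hi;
}
```

The minimum comes from sql3 and the maximum from the SUM aggregate in sql11.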

Next Steps

I will post the prototype code to git.linaro.org. I plan to do this once I’m able to handle character data. Guessing that should be in about a week. It’ll give me some time to also cut off a useless prehensile tail or two.

From here I want to add character data into the mix and then begin the construction of the general purpose path to generate and execute the resulting OpenCL kernel. I want to gather a wider range of  performance numbers to understand what the minimum number of rows of data needs to be before it makes sense to use OpenCL.

I would like to run this code on a SoC with a Mali T628 to see how the performance changes.

I would like to run this code on a different GPU such as Qualcomm’s Adreno.

Last, this approach certainly could use RenderScript instead of OpenCL. Obviously it would only work on Android, but that’s perfectly fine and I think well worth the time. My “current” stable Android platform is an original Nexus 7, which, while it runs KitKat, I’m not sure would be the best choice compared to the current Nexus 7 with its 400 MHz quad-core Adreno 320. Time to review the hardware budget.

Late on a Saturday night and I’m working on my monitor tan. It’s spring, can’t be too early to prepare for summer of course!

I’ve taken the following SQL queries and run them both with the traditional sqlite C APIs as well as with my OpenCL accelerated APIs. These queries are the same ones that Bakkum et al. used in their CUDA accelerated sqlite paper.

char *sql11 ="SELECT SUM(normalf20) FROM test";
char *sql12 ="SELECT AVG(uniformi) FROM test WHERE uniformi > 0";
char *sql13 ="SELECT MAX(normali5), MIN(normali5) FROM test";

Straight sqlite with my A15 based Samsung Chromebook yields:

sql11 took 95399 microseconds
sql12 took 86576 microseconds
sql13 took 121898 microseconds

My OpenCL APIs yields the following for the same queries:

OpenCL sql11 took 46098 microseconds
OpenCL sql12 took 55524 microseconds
OpenCL sql13 took 64802 microseconds

The data is the same for both the straight C sqlite APIs and the OpenCL APIs: 100,000 rows to process from one database with one table. The time measured is the time to perform the query across all selected data and for the end-user API to obtain the data. For OpenCL this includes copying out the data. For the straight C APIs this includes the time spent accessing each row.

I’m not applying any sort of statistical process or test to these results. That’ll be a later step to assert a confidence interval based on a distribution of collected results.

All in all I don’t think the results are too bad, but I’d like to think OpenCL should be able to do better. Time to spend a little time with perf, as well as do a little digging to see what might be available from the Mali developer site to analyze performance on the GPU.

These microbenchmarks are important to me. They give a guide as far as what might be accomplished with a general purpose solution which is yet to be written.  They also are helping me to form opinions about how to best approach it.

People have side projects. This one is mine.

What if you accelerate the popular sqlite database with OpenCL? This is one of the ideas that was floated as part of the GPGPU team to get a feel for what might be accomplished on ARM hardware with a mobile GPU.

In my case I’m using the Mali OpenCL drivers, running Ubuntu Linux on a Samsung Chromebook, which includes a dual-core A15 and a Mali T604. You can replicate this same setup by following these instructions.

At Linaro Connect Asia 2014 as part of the GPGPU session I gave an overview of the effort but I wasn’t able to give any initial performance numbers since my free time is highly variable and Connect arrived before I was quite ready. At the time I was about a week out from being able to run a microbenchmark or two since I was just getting to the step of writing some of the OpenCL.

Before I get to some initial numbers let me review a bit of what I talked about at Connect.

To accelerate sqlite I’ve initially added an API that sits next to the sqlite C API. My API in time should be able to blend right into the sqlite API so that no code changes would be needed by end-user applications. With sqlite you usually have a call sequence something like:

sqlite3_open(databaseName, &db);
c = sqlite3_prepare_v2(db, sql, -1, &selectAll_statement, NULL);
while (sqlite3_step(selectAll_statement) == SQLITE_ROW) {
    sqlite3_column_TYPE(selectAll_statement, 0);
}

The prepare call takes SQL and converts it to an expression tree that is translated into bytecode, which is run inside of a VM. The virtual machine is really nothing more than a big switch statement where each case handles an op code that the VM operates over. sqlite doesn’t do any sort of JIT to accelerate its operation. (I know what you’re thinking; hold that thought.)
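That big switch statement looks, in miniature, something like this toy dispatch loop (the opcodes are invented for illustration; they are not SQLite’s):

```c
/* A toy switch-dispatch VM in the style sqlite uses: fetch an opcode,
 * switch on it, repeat until a halt. */
enum { OP_PUSH, OP_ADD, OP_HALT };

static int run_vm(const int *code) {
    int stack[16];
    int sp = 0;
    for (int pc = 0;;) {
        switch (code[pc++]) {
        case OP_PUSH:                 /* push the next literal */
            stack[sp++] = code[pc++];
            break;
        case OP_ADD:                  /* pop two, push their sum */
            sp--;
            stack[sp - 1] += stack[sp];
            break;
        case OP_HALT:                 /* result is top of stack */
            return stack[sp - 1];
        }
    }
}
```

SQLite’s real opcodes of course fetch columns, compare values and so on, but the fetch-switch-execute shape is the same.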

The challenge in making a general purpose acceleration is to take the operation of the VM and move it onto the GPU. I see a few ways to accomplish this. In past work, Peter Bakkum and Kevin Skadron basically moved the implementation of the VM onto the GPU using CUDA. That kind of approach really doesn’t work, in my opinion, for OpenCL. Instead I’m currently of the opinion that the output of the sql expression tree ought to be a bit more than just VM bytecodes. I do wonder if utilizing llvm couldn’t offer interesting possibilities, including SPIR (the Khronos intermediate representation standard for OpenCL). Further research for sure.

The OpenCL accelerated API sequence looks like:

opencl_init(s, db);
opencl_prepare_data(s, sql);
opencl_transfer_data(s);
opencl_select(s, sql, 0);
opencl_transfer_results(s);

At this point, what I’ve managed to do is run the same query against a 100,000 row database with 7 columns, using both the sqlite C interface and my OpenCL accelerated interface.

With the sqlite C API the query took 420274 microseconds on my Chromebook, a dual-core A15 CPU running at 1.7 GHz.

The OpenCL accelerated API running on the Mali T604 GPU at 533 MHz(?) in the same Chromebook yields 110289 microseconds. This measured time includes both the running of the OpenCL kernel and the data transfer from the result buffer.

These are early results. Many grains of salt should be applied, but overall this seems like a good result for a mobile GPU.

The last session has passed. As I write this, it’s sort of a situation out of the Twilight Zone: I’ve managed to break my glasses. I’m fairly nearsighted, but my vision isn’t good enough for my screen to be in focus at an average distance.

The last day of Connect we had two sessions. Friday is a tough day to run a session on account of people being tired. Numbers suffer, and it seems like we are all subject to a -20 IQ modifier in the technical discussion at times.

Friday Sessions

The ION session covered the current work in progress, with John Stultz from the Android team and Sumit Semwal tag-team presenting. Since Plumbers there’s been a good amount of activity. Colin Cross updated this code a fair amount as it was reviewed. He created a number of tests, which John Stultz ported outside of Android; a dummy driver has been put together for testing on non-ION-enabled graphics stacks; and the 115+ patch set was pushed up into staging. As of right now these patches build and run on ARM, x86_64 and ARM64. There’s more to do: the team is working to get the tests running in LAVA, and there are a number of design issues yet to be worked out on the dma-buf side of things. There need to be constraint-aware dma-buf allocation helper functions. Dma-buf needs to try to reuse some of the ION code so they are both using the same heaps. Then there needs to be a set of functions within dma-buf that will examine heap flags and allocate memory from the correct ION heap. It all boils down to having a more common infrastructure between dma-buf and ION, such that the ION and dma-buf interfaces will rest nicely on what is there instead of being two separate and divergent things.

Benjamin Gaignard presented on Wayland / Weston. He reviewed the current status of the future replacement for X, its status on ARM, and how people can use it today. He covered current efforts to address some of the design failings of Wayland/Weston that assume all systems have graphics architectures like Intel’s. The use of hardware compositors rather than GPUs on ARM presents a disjoint view of the world compared to Intel, which just assumes everyone has a GPU and will want to use things like EGLImage. This is a common theme which we must introduce time and time again to various developer communities who have limited ARM experience. At this point Benjamin’s focus has been more on introducing into Wayland/Weston the ability to take advantage of dma-buf, to promote the sharing of buffers over copying. It’s a slow go, especially without the user-space dma-buf helpers from yesterday’s session. Wayland/Weston is viable for use on ARM. It’s not perfect, and we anticipate more work in this space, first with dma-buf and then to take advantage of the compositing hardware often found on ARM SoCs.

Summary for the week

Media & Lib Team

I’m pleased that the media team was able to formulate a list of the next media libraries to port and optimize for AArch64. We synced with the ARM team, which is also working in the area of optimization. This is vital so that we don’t accidentally replicate efforts.

Display Team

Bibhuti and I were able to sit down and discuss the hwcomposer project. We’ve set the milestones and we’ll set the schedule. I think it’s more than fair to say we’ll be showing code at the next Connect. The next steps for Mali driver support on the graphics LSK branch include further boards, depending on their kernel status, as well as some discussion about the potential to try to formally upstream the Mali kernel driver, even in the face of likely community opposition. We had great discussions with the LHG, and no doubt we’ll be working together to support LHG as we do other groups.

UMM

As discussed above the UMM team is heads down on creating their initial PoC for connecting the heap allocators to provide map-time post attach allocation. This is code in progress and a very important step in knitting the ION and dma-buf worlds closer together.

GPGPU

This was more of a quiet Connect for GPGPU since projects are midstream; GPGPU is more in a “sprint” mode than a “connect” mode right now. We did release the GPGPU whitepaper to the Friends of OCTO.