Archive for March, 2014

I’ve done some more tuning over the past couple of days. I’ve also done some reading about how to make OpenCL perform better on ARM Mali. In this post I’m going to retrace some of my steps, share what my tests look like, share what some of my OpenCL looks like, share current performance numbers, and, last, discuss next steps.

Gentle Reminder

This is in many ways still a prototype / proof of concept. My early goal is to get a very good sense of what the possible performance of an OpenCL accelerated SQLite would be for the general case. From this prototype I want to be able to iterate toward a complete implementation.

Performance Testing the Prototype

I’m comparing the C APIs that SQLite provides against my own API that I’ve developed. My API works with the same data structures and is compiled with the SQLite source code, so I’m able to have an apples-to-apples comparison. The performance measurement always ends when the caller of the API has all the data that the SQL statement requested.

While OpenCL has some nice mechanisms to measure the beginning and end of operations, I’m instead using clock_gettime for time measurements. For example:

struct timespec startSelectTime, endSelectTime;

if (clock_gettime(CLOCK_MONOTONIC, &startSelectTime)) {
    return -1;
}
/* ... the operation being measured ... */
if (clock_gettime(CLOCK_MONOTONIC, &endSelectTime)) {
    return -1;
}
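
To turn the two timespecs into an elapsed time in microseconds, a small helper along these lines works (my code, not part of SQLite or the prototype):

static long long elapsed_usec(const struct timespec *start,
                              const struct timespec *end)
{
    /* Convert the two CLOCK_MONOTONIC readings into a microsecond delta. */
    return (end->tv_sec - start->tv_sec) * 1000000LL +
           (end->tv_nsec - start->tv_nsec) / 1000LL;
}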

I am using 13 SQL statements that were defined in the CUDA-accelerated SQLite paper by Peter Bakkum and Kevin Skadron. I am also using their same 100,000-row x 7-column database.

The 13 SQL statements are:

char *sql1 = "SELECT id, uniformi, normali5 FROM test WHERE uniformi > 60 AND normali5 < 0";
char *sql2 = "SELECT id, uniformf, normalf5 FROM test WHERE uniformf > 60 AND normalf5 < 0";
char *sql3 = "SELECT id, uniformi, normali5 FROM test WHERE uniformi > -60 AND normali5 < 5";
char *sql4 = "SELECT id, uniformf, normalf5 FROM test WHERE uniformf > -60 AND normalf5 < 5";
char *sql5 = "SELECT id, normali5, normali20 FROM test WHERE (normali20 + 40) > (uniformi - 10)";
char *sql6 = "SELECT id, normalf5, normalf20 FROM test WHERE (normalf20 + 40) > (uniformf - 10)";
char *sql7 = "SELECT id, normali5, normali20 FROM test WHERE normali5 * normali20 BETWEEN -5 AND 5";
char *sql8 = "SELECT id, normalf5, normalf20 FROM test WHERE normalf5 * normalf20 BETWEEN -5 AND 5";
char *sql9 = "SELECT id, uniformi, normali5, normali20 FROM test WHERE NOT uniformi OR NOT normali5 OR NOT normali20";
char *sql10 = "SELECT id, uniformf, normalf5, normalf20 FROM test WHERE NOT uniformf OR NOT normalf5 OR NOT normalf20";
char *sql11 = "SELECT SUM(normalf20) FROM test";
char *sql12 = "SELECT AVG(uniformi) FROM test WHERE uniformi > 0";
char *sql13 = "SELECT MAX(normali5), MIN(normali5) FROM test";

The queries are a good starting point; however, I do feel the need to add to them, as they avoid the use of character data, for instance. As a start, though, I find them reasonable.

A bit about SQLite

There isn’t much magic when it comes to SQLite, or really most other databases, when you think about it. You have rows of data. Within those rows there are some number of columns. Your query might be interested in the 1st, 3rd and 5th columns but nothing else. You might have some sort of test or sum or average you want performed as part of your query. As it stands today, SQLite will parse the SQL statement and turn it into a bytecode program. The bytecode program is run in a virtual machine that knows where to find the columns and their types, how to perform operations on the data, and so on.
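
If you’re curious what that bytecode looks like, prefixing any statement with EXPLAIN (for example, EXPLAIN SELECT id, uniformi, normali5 FROM test WHERE uniformi > 60 AND normali5 < 0) makes SQLite print the virtual machine program, with opcodes such as Column, ResultRow and Next, instead of executing it.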

The SQLite virtual machine does not have a just in time compiler.

My OpenCL acceleration essentially picks up at the point after the bytecode program has been created but before it is executed. In the normal case you’d execute the bytecode program on the CPU and get your data. In the OpenCL accelerated case, the OpenCL kernel takes the place of the bytecode program, with the database data as input.

In the future, instead of bytecode, I think using LLVM’s MCJIT to just-in-time compile for either a CPU or dispatch to a GPU makes the most sense. At a future date, SPIR (OpenCL’s IR) generated by the just-in-time compiler could then be fed into a SPIR-supporting driver for GPU offload. In the case of CPUs you’d of course use MCJIT to generate machine instructions.

Evolution of OpenCL SQLite queries

Consider the following SQL statement:

SELECT id, uniformi, normali5 FROM test WHERE uniformi > 60 AND normali5 < 0

Internal to SQLite, this query runs against a table with 7 columns; id, uniformi and normali5 are the first three columns in each row.

So in our OpenCL code we can use the following mapping:

typedef struct tag_my_struct {
    int id;
    int uniformi;
    int normali5;
} Row;

(Note: I did try to use an int3, or an int2 plus an int, in the data structure, but I could never get it to work. With queries involving, say, 4 ints, I have used int4 vectors and that works really well.)

This gives us the following code to run for each row:

do {
    if ((data[offset].uniformi > 60) && (data[offset].normali5 < 0)) {
        resultArray[roffset].id = data[offset].id;
        resultArray[roffset].uniformi = data[offset].uniformi;
        resultArray[roffset].normali5 = data[offset].normali5;
        roffset++;
    }
    offset++;
    endRow--;
} while (endRow);

Next it comes down to how many rows should a work unit handle?

We’re fortunate in that, within our computational problem, we don’t have any dependency between rows. If we had 100,000 rows to consider and 100,000 GPU cores, every GPU core could work on one row and that would be that. Silly, but you could do it.

Now if you’re thinking to yourself that the output in resultArray will be different for each run of the kernel, and that the roffset from each work unit will be different and have to be picked up and considered as part of the results, you’re entirely correct.
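
As a rough sketch of how this shapes up (kernel and parameter names here are mine, not the prototype’s actual code), each work item can process a contiguous slice of rows, write matches into its own slice of resultArray, and record how many matches it found:

typedef struct tag_my_struct {
    int id;
    int uniformi;
    int normali5;
} Row;

__kernel void filter_rows(__global const Row *data,
                          __global Row *resultArray,
                          __global int *matchCounts,
                          const int rowsPerWorkItem)
{
    int gid = get_global_id(0);
    int offset = gid * rowsPerWorkItem;  /* first row for this work item */
    int roffset = offset;                /* matches land in our own slice */
    int endRow = rowsPerWorkItem;

    do {
        if ((data[offset].uniformi > 60) && (data[offset].normali5 < 0)) {
            resultArray[roffset] = data[offset];
            roffset++;
        }
        offset++;
        endRow--;
    } while (endRow);

    /* The host uses these counts to compact the sparse resultArray. */
    matchCounts[gid] = roffset - gid * rowsPerWorkItem;
}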

There is some amount of post processing work that needs to be performed.

The time it takes to do any sort of data copying / munging into the GPU’s memory, plus the time it takes to run the OpenCL kernel, plus the time to copy results out and do any post-processing, needs to be compared to the time the original, non-OpenCL-accelerated SQLite implementation would have taken.
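
Put another way, the offload only pays off when t_copy_in + t_kernel + t_copy_out + t_post < t_cpu for a given query and row count.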

Tuning for Mali

It’s important to understand the architecture of the device(s) you may be running on. In my case I’m using a Mali T604 in an ARM-based Samsung Chromebook. There are a couple of papers I recommend reading, especially the first one:

http://malideveloper.arm.com/downloads/GPU_Pro_5/GronqvistLokhmotov_white_paper.pdf 

http://malideveloper.arm.com/develop-for-mali/tutorials-developer-guides/developer-guides/mali-t600-series-gpu-opencl-developer-guide/

For me there were a couple of pieces of advice that yielded significant gains.

Avoid array indexing arithmetic

Using the first query (sql1), I initially accessed the elements of data in the following way:

 if ((data[offset+1] > 60) && (data[offset+2] < 0)) {

This is slower compared to using an array of data structures such as:

typedef struct tag_my_struct {
    int id;
    int uniformi;
    int normali5;
} Row;

if ((data[offset].uniformi > 60) && (data[offset].normali5 < 0)) {

Vectors are even better, but as mentioned I wasn’t able to get a structure with an int3, or an int2 and an int, to work. A structure with float4 or int4 vectors, however, worked just fine.

typedef struct tag_my_struct {
    float4 v;
} Row;

How about the use of vload/vstore? For my data I haven’t yet seen a noticeable performance improvement using them. This is on my list to revisit.
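
For reference, vload4 reads four consecutive elements in one operation; a one-line sketch against a flat int buffer (the cast and index are illustrative):

int4 r = vload4(i, (__global const int *)data);  /* elements i*4 .. i*4+3 */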

The memory architecture of Mali is important to understand and take advantage of. Notably, access to OpenCL global and local data has the same cost, so there is nothing to gain from staging data in local memory.

Buffers

How one creates memory buffers is important and has a performance impact.

clCreateBuffer with CL_MEM_ALLOC_HOST_PTR is the preferred method.

clCreateBuffer with CL_MEM_USE_HOST_PTR wraps a buffer allocated by malloc. Unfortunately that means the contents of the malloced region must be copied into storage accessible by Mali. CL_MEM_ALLOC_HOST_PTR avoids the copy.
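
A minimal sketch of the preferred path (variable names here are illustrative, not from the prototype):

cl_int err;
cl_mem buf = clCreateBuffer(context,
                            CL_MEM_ALLOC_HOST_PTR | CL_MEM_READ_ONLY,
                            nRows * sizeof(Row), NULL, &err);

/* Map the driver-allocated storage and fill it from the CPU; no copy needed. */
Row *rows = (Row *)clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                      0, nRows * sizeof(Row),
                                      0, NULL, NULL, &err);
/* ... populate rows[0..nRows-1] with the database data ... */
clEnqueueUnmapMemObject(queue, buf, rows, 0, NULL, NULL);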

On desktop GPUs, memory on the graphics card is disjoint from the memory that the CPU core(s) have direct access to. As such, operations such as

 err = clEnqueueReadBuffer(s->queue, s->results_gpu->roffsetResults ....

make sense. However, this is slower on ARM and Mali. Instead,

 t = clEnqueueMapBuffer(s->queue, s->results_gpu->roffsetResults ....

is much faster. In my case I started with clEnqueueReadBuffer and, with my newfound knowledge from the guides, moved to clEnqueueMapBuffer.
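
The same idea applies on the result side; a hedged sketch, reusing the author’s buffer names from above (nWorkItems and err are my names):

int *counts = (int *)clEnqueueMapBuffer(s->queue, s->results_gpu->roffsetResults,
                                        CL_TRUE, CL_MAP_READ,
                                        0, nWorkItems * sizeof(int),
                                        0, NULL, NULL, &err);
/* ... walk the per-work-item counts and compact the matching rows ... */
clEnqueueUnmapMemObject(s->queue, s->results_gpu->roffsetResults, counts,
                        0, NULL, NULL);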

Kernel Work Units

How many work units one should use is not always a straightforward value. Gronqvist & Lokhmotov note that the theoretical maximum for Mali is 256, but the practical number depends on how many registers the OpenCL kernel uses: with four registers or fewer, 256 is the right value; between four and eight, 128; and between eight and sixteen, 64 work units.

For my integer-based kernels I noticed that 64 work units was the right choice. For my kernels that mix integer and floating point, 128 was the right setting. I don’t have an explanation for this.
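
One way to pick the work-group size at runtime rather than hard-coding it (a sketch with illustrative names; the 64 reflects what worked for my integer kernels):

size_t maxWg;
clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE,
                         sizeof(maxWg), &maxWg, NULL);
size_t local = (maxWg < 64) ? maxWg : 64;  /* clamp to the kernel's limit */
size_t global = numWorkItems;              /* must be a multiple of local */
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local,
                       0, NULL, NULL);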

Early Results

Here’s what I’m seeing for results across the 13 queries. CPU times are from a SQLite build compiled with -O2 and running on a Cortex-A15. GPU times are from a Mali T604.

SELECT id, uniformi, normali5 FROM test WHERE uniformi > 60 AND normali5 < 0

CPU sql1 took 43631 microseconds

OpenCL sql1  took 14545 microseconds  (2.99x better or 199% better)


SELECT id, uniformf, normalf5 FROM test WHERE uniformf > 60 AND normalf5 < 0

CPU sql2 took 62785 microseconds

OpenCL sql2 took 7756 microseconds (8.09x better or 709% better)


SELECT id, uniformi, normali5 FROM test WHERE uniformi > -60 AND normali5 < 5

CPU sql3 took 114448 microseconds

OpenCL sql3.cl took 44533 microseconds (2.56x better or 156% better)


SELECT id, uniformf, normalf5 FROM test WHERE uniformf > -60 AND normalf5 < 5

CPU sql4 took 139694 microseconds

OpenCL sql4.cl took 20911 microseconds (6.68x better or 568% better)


SELECT id, normali5, normali20 FROM test WHERE (normali20 + 40) > (uniformi - 10)

CPU sql5 took 138859 microseconds

OpenCL sql5.cl took 48834 microseconds (2.84x better or 184% better)


SELECT id, normalf5, normalf20 FROM test WHERE (normalf20 + 40) > (uniformf - 10)

CPU sql6 took 163830 microseconds

OpenCL sql6.cl took 22712 microseconds (7.21x better or 621% better)


SELECT id, normali5, normali20 FROM test WHERE normali5 * normali20 BETWEEN -5 AND 5

CPU sql7 took 82662 microseconds

OpenCL sql7.cl took 20669 microseconds (3.99x better or 299% better)


SELECT id, normalf5, normalf20 FROM test WHERE normalf5 * normalf20 BETWEEN -5 AND 5

CPU sql8 took 96882 microseconds

OpenCL sql8.cl took 10854 microseconds (8.92x better or 792% better)


SELECT id, uniformi, normali5, normali20 FROM test WHERE NOT uniformi OR NOT normali5 OR NOT normali20

CPU sql9 took 74317 microseconds

OpenCL sql9.cl took 12955 microseconds (5.73x better or 473% better)


SELECT id, uniformf, normalf5, normalf20 FROM test WHERE NOT uniformf OR NOT normalf5 OR NOT normalf20

CPU sql10 took 91617 microseconds

OpenCL sql10.cl took 7524 microseconds (12.17x better or 1117% better)


SELECT SUM(normalf20) FROM test

CPU sql11 took 44995 microseconds

OpenCL sql11 took 2190 microseconds (20.54x or 1954% better)


SELECT AVG(uniformi) FROM test WHERE uniformi > 0

CPU sql12 took 41000 microseconds

OpenCL sql12.cl took 4236 microseconds (9.67x or 867% better)


SELECT MAX(normali5), MIN(normali5) FROM test

CPU sql13 took 52354 microseconds
OpenCL sql13.cl took 4619 microseconds (11.33x or 1034% better)


This gives us a range of improvements between 2.56x and 20.54x better when OpenCL is used for these queries.

Next Steps

I will post the prototype code to git.linaro.org. I plan to do this once I’m able to handle character data. Guessing that should be in about a week. It’ll give me some time to also cut off a useless prehensile tail or two.

From here I want to add character data into the mix and then begin construction of the general purpose path to generate and execute the resulting OpenCL kernel. I want to gather a wider range of performance numbers to understand what the minimum number of rows of data needs to be before it makes sense to use OpenCL.

I would like to run this code on an SoC with a Mali T628 to see how the performance changes.

I would like to run this code on a different GPU such as Qualcomm’s Adreno.

Last, this approach certainly could use RenderScript instead of OpenCL. Obviously it would only work on Android, but that’s perfectly fine and I think well worth the time. My “current” stable Android platform is an original Nexus 7, which, while it runs KitKat, I’m not sure would be the best choice compared to the current Nexus 7, which has a 400 MHz quad-core Adreno 320. Time to review the hardware budget.


Late on a Saturday night and I’m working on my monitor tan. It’s spring, can’t be too early to prepare for summer of course!

I’ve taken the following SQL queries and run them both with the traditional SQLite C APIs as well as with my OpenCL accelerated APIs. These queries are the same that Bakkum et al. used in their CUDA-accelerated SQLite paper.

char *sql11 ="SELECT SUM(normalf20) FROM test";
char *sql12 ="SELECT AVG(uniformi) FROM test WHERE uniformi > 0";
char *sql13 ="SELECT MAX(normali5), MIN(normali5) FROM test";

Straight SQLite on my A15-based Samsung Chromebook yields:

sql11 took 95399 microseconds
sql12 took 86576 microseconds
sql13 took 121898 microseconds

My OpenCL APIs yield the following for the same queries:

OpenCL sql11 took 46098 microseconds
OpenCL sql12 took 55524 microseconds
OpenCL sql13 took 64802 microseconds

The data is the same for both the straight C SQLite APIs and the OpenCL APIs: 100,000 rows to process from one database with one table. The time measured is the time to perform the query across all selected data and for the end-user API to obtain the data. For OpenCL this includes copying out the data. For the straight C APIs this includes the time spent accessing each row.

I’m not applying any sort of statistical process or test to these results. That’ll be a later step to assert a confidence interval based on a distribution of collected results.

All in all I don’t think the results are too bad, but I’d like to think OpenCL should be able to do better. Time to spend a little time with perf, as well as do a little digging to see what might be available from the Mali developer site to analyze performance on the GPU.

These microbenchmarks are important to me. They give a guide as far as what might be accomplished with a general purpose solution which is yet to be written.  They also are helping me to form opinions about how to best approach it.

People have side projects. This one is mine.

What if you accelerate the popular SQLite database with OpenCL? This is one of the ideas that was floated as part of the GPGPU team’s work, to get a feel for what might be accomplished on ARM hardware with a mobile GPU.

In my case I’m using the Mali OpenCL drivers, running Ubuntu Linux on a Samsung Chromebook, which includes a dual-core A15 and a Mali T604. You can replicate this same setup by following these instructions.

At Linaro Connect Asia 2014, as part of the GPGPU session, I gave an overview of the effort, but I wasn’t able to give any initial performance numbers since my free time is highly variable and Connect arrived before I was quite ready. At the time I was about a week out from being able to run a microbenchmark or two, since I was just getting to the step of writing some of the OpenCL.

Before I get to some initial numbers let me review a bit of what I talked about at Connect.

To accelerate SQLite I’ve initially added an API that sits next to the SQLite C API. My API in time should be able to blend right into the SQLite API so that no code changes would be needed by end-user applications. With SQLite you usually have a call sequence something like:

sqlite3_open(databaseName, &db);
rc = sqlite3_prepare_v2(db, sql, -1, &selectAll_statement, NULL);
while (sqlite3_step(selectAll_statement) == SQLITE_ROW) {
    /* sqlite3_column_int(), sqlite3_column_double(), etc., per column type */
    sqlite3_column_TYPE(selectAll_statement, 0);
}
sqlite3_finalize(selectAll_statement);
sqlite3_close(db);

The prepare call takes SQL and converts it to an expression tree that is translated into bytecode, which is run inside of a VM. The virtual machine is really nothing more than a big switch statement, where each case handles an opcode that the VM operates over. SQLite doesn’t do any sort of JIT to accelerate its operation. (I know what you’re thinking; hold that thought.)
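
The shape of that VM loop is roughly the following (an illustrative sketch, not SQLite’s actual source, though OP_Column and OP_ResultRow are real VDBE opcodes):

for (pc = 0; rc == SQLITE_OK; pc++) {
    Op *pOp = &aOp[pc];
    switch (pOp->opcode) {
    case OP_Column:     /* read one column out of the current row */
        break;
    case OP_ResultRow:  /* hand a completed row back to sqlite3_step() */
        break;
    /* ... one case per opcode ... */
    }
}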

The challenge in making a general purpose acceleration is to take the operation of the VM and move it onto the GPU. I see a few ways to accomplish this. In past work, Peter Bakkum and Kevin Skadron basically moved the implementation of the VM onto the GPU using CUDA. That kind of approach really doesn’t work, in my opinion, for OpenCL. Instead I’m currently of the opinion that the output of the SQL expression tree ought to be a bit more than just VM bytecodes. I do wonder whether utilizing LLVM couldn’t offer interesting possibilities, including SPIR (the Khronos intermediate representation standard for OpenCL). Further research for sure.

The opencl accelerated API sequence looks like:

opencl_init(s, db);
opencl_prepare_data(s, sql);
opencl_transfer_data(s);
opencl_select(s, sql, 0);
opencl_transfer_results(s);

At this point, what I’ve managed to do is run the same query over a 100,000-row database with 7 columns using both the SQLite C interface and my OpenCL accelerated interface.

With the SQLite C API the query took 420274 microseconds on my Chromebook, a dual-core A15 CPU running at 1.7 GHz.

The OpenCL accelerated API running on the Mali T604 GPU (at 533 MHz?) in the same Chromebook yields 110289 microseconds. This measured time includes both the running of the OpenCL kernel and the data transfer from the result buffer.

These are early results and many grains of salt should be applied, but overall this seems like a good result for a mobile GPU.

The last session has passed. As I write this, it’s sort of a situation out of the Twilight Zone. I’ve managed to break my glasses. I’m fairly nearsighted, and my vision isn’t good enough for my screen to be in focus at an average distance.

The last day of Connect we had two sessions. Friday is a tough day to run a session on account of people being tired. Numbers suffer, and it seems like we are all subject to a -20 IQ modifier to the technical discussion at times.

Friday Sessions

The ION session covered the current work in progress, with John Stultz from the Android team and Sumit Semwal tag-team presenting. Since Plumbers there’s been a good amount of activity. Colin Cross updated this code a fair amount as it was reviewed. He created a number of tests which John Stultz ported outside of Android, a dummy driver has been put together for testing on non-ION-enabled graphics stacks, and the 115+ patch set was pushed up into staging. As of right now these patches build and run on ARM, x86_64 and ARM64. There’s more to do; the team is working to get the tests running in LAVA. There are a number of design issues yet to be worked out on the dma-buf side of things. There need to be constraint-aware dma-buf allocation helper functions. Dma-buf needs to try to reuse some of the ION code so they are both reusing the same heaps. Then there needs to be a set of functions within dma-buf that will examine heap flags and allocate memory from the correct ION heap. It all boils down to having a more common infrastructure between dma-buf and ION such that the ION and dma-buf interfaces will rest nicely on what is there, instead of being two separate and divergent things.

Benjamin Gaignard presented on Wayland / Weston. He reviewed the current status of the future replacement for X, its status on ARM, and how people can use it today. He covered current efforts to address some of the design failings of Wayland/Weston that assume all systems have graphics architectures like Intel’s. The use of hardware compositors over GPUs on ARM presents a disjoint view of the world compared to Intel, which just assumes everyone has a GPU and will want to use things like EGLImage. This is a common theme which we must introduce time and time again to various developer communities who have limited ARM experience. At this point Benjamin’s focus has been more to introduce into Wayland/Weston the ability to take advantage of dma-buf to promote the sharing of buffers over copying. It’s a slow go, especially without the user-space dma-buf helpers that were the subject of yesterday’s session. Wayland/Weston is viable for use on ARM. It’s not perfect, and we anticipate more work in this space, first with dma-buf and then to take advantage of the compositing hardware often found on ARM SoCs.

Summary for the week

Media & Lib Team

I’m pleased that the media team was able to formulate a list of the next media libraries to port and optimize for AARCH64. We synced with the ARM team, which is also working in the area of optimization. This is vital so that we don’t replicate efforts accidentally.

Display Team

Bibhuti and I were able to sit down and discuss the hwcomposer project. We’ve set the milestones and we’ll set the schedule. I think it’s more than fair to say we’ll be showing code at the next Connect. The next step for Mali driver support on the graphics LSK branch includes further boards, depending on their kernel status, as well as some discussion about the potential to try to formally upstream the Mali kernel driver, even in the face of likely community opposition. We had great discussions with the LHG, and no doubt we’ll be working together to support LHG like we do with other groups.

UMM

As discussed above, the UMM team is heads down on creating their initial PoC for connecting the heap allocators to provide map-time, post-attach allocation. This code is in progress and a very important step in knitting the ION and dma-buf worlds closer together.

GPGPU

This was a quieter Connect for GPGPU since projects are midstream; GPGPU is more in a “sprint” mode than a “connect” mode. We did release the GPGPU whitepaper to the Friends of OCTO.

Thursday featured the UMM user space allocator helper discussion and the GPGPU status talk.

The UMM User Space Allocators discussion was given by Sumit Semwal and Benjamin Gaignard. The problem involves the need from user space to allocate and work with memory for sharing between devices. Consider a video pipeline or a web camera that is rendering to the screen. This work will help achieve a zero copy design without user space having to know hardware details such as memory ranges, and other device constraints.

Gil Pitney and I gave the GPGPU talk, which covered the current efforts of the GPGPU subteam. Gil is working on Shamrock, which is the old Clover project evolved. He’s upgraded it to use current top-of-tree LLVM and MCJIT for code generation. There’s still testing to do, but these are excellent steps forward, as getting off the old JIT was important. Shamrock provides a CPU-only OpenCL implementation, which is great for those that don’t want to implement their own drivers but still want to provide at least the basic functionality. In addition there will be, via Shamrock, a driver for TI DSP hardware. This is also quite a great step forward. Via this route, everyone can collaborate on the open source portion which takes care of the base library, and this leaves just the driver/codegen to be created by the board creator.

The other part of the talk was about accelerating SQLite with OpenCL. There was a past project that accomplished something similar, but with CUDA. I’m working on this and it’s quite the enjoyable project. I’m just implementing OpenCL kernels so there is a ways to go. It will serve as a good reference for what can be accomplished on ARM SoC systems which have OpenCL drivers. We typically don’t have as many shader units as modern desktop PCIe solutions in the Intel universe. I do find it encouraging that the SQLite design is quite flexible and fits well with this kind of experiment.

I also attended the session on the Ne10 and Chromium optimizations for Cocos2D/Cocos2D-HTML5. These are ARM projects. Ne10 is essentially a library that sits above NEON intrinsics to give easier access to that functionality. Cocos is a popular cross-platform engine that is particularly popular in the Android world for 2D UIs and game creation. There was some nice optimization work in and around various drawing primitives, done by the ARM team for Chromium, that ends up helping Cocos.

Thursday included the first bit of quiet time I had all week to actually write some code. It didn’t last long but it did feel good as I’m in a very fun portion of implementing the optimized SQLite with OpenCL and it was hard to set that work aside while Connect is on.

Our first Graphics Working Group session of this Connect was today. We reviewed the ongoing efforts to optimize various media libraries for AARCH64. James Yu and Ragesh Radhakrishnan talked about their work involving libpng, libjpeg-turbo, libvpx and pixman. The libjpeg-turbo work features a refresh of the Android port, which allows libjpeg-turbo to be a drop-in replacement for libjpeg, which is currently part of AOSP. The performance difference is clear. libjpeg-turbo has received quite a lot of optimization work over the past few years, while libjpeg has languished for a variety of unfortunate political reasons. The result? libjpeg-turbo is better than twice as fast as libjpeg.

James talked about his libvpx work, which of course includes VP8 and VP9 support. He’s essentially replaced the past hand-coded assembler with a version that uses NEON intrinsics. Comparing hand-coded assembler vs NEON intrinsics on ARMv7, there’s a degradation in performance of nearly 10% that needs to be looked into. In prior efforts with libpng, a similar switch yielded statistically no change in performance.

On the good news side, progress continues on porting past optimized libraries so that they are also optimized for AARCH64 in preparation for the arrival of real hardware.

The other important goal of the session was to gather input on the next set of libraries that should be optimized for AARCH64. We’ve a limited amount of time. We’ve a limited amount of hands. We won’t have everything optimized by the time real hardware shows up. We want to optimize first what is viewed as most important.

Unfortunately we didn’t get a lot of direction. There seems to be a feeling that codecs and such that are in Android should be considered first priority, though there was some rightful dissent on that concept from those interested in traditional Linux.

Besides leaning towards Android, the other idea that emerged was that we should give priority to video over audio. This makes sense: while default fallbacks for audio use more CPU, they generally won’t completely consume a CPU with loss of quality, as compared to a video codec that will suffer in framerate and quality.

libav was discussed as being potentially important to 3rd-party application developers. There don’t appear to be solid usage numbers yet for how much libav is in use by 3rd-party application developers on Android. I would presume there must be some, as libav is a good choice for “odd” formats.

In the afternoon we reviewed our internal list of codecs and media libs for porting and optimizing for AARCH64, and with some other attendees from Connect we assembled our list of the “next” libs to put time and attention into. We’ll be reviewing this with others first, but generally speaking it looks approximately like this:

  1. mpeg4
  2. webrtc (audio portion)
  3. aac  (from AOSP)
  4. flac
  5. vorbis
  6. mp3
  7. h265
  8. speex

libav is still a discussion point, so that might very well find its way onto the list.

LCA14 GWG Day 2


Tuesday was yet another day with no sessions hosted by the Graphics Working Group. Wednesday, Thursday and Friday are our big days. So for us it was a day of our own meetings and attending other working groups’ sessions.

I went to the 64-bit toolchain status meeting. It was good to hear that LLVM’s MCJIT is at least able to pass its tests on AARCH64. This is important for future versions of OpenCL on AARCH64, for instance. I do wish there was a bit more focus on LLVM for AARCH64, though. With significant projects like Chromium seriously considering a move to LLVM, it seems wise.

The next session I went to was the one on SQLite optimization. SQLite is the database at the heart of, and in very common use across, Android, iOS, Linux and so on. It’s an important foundation block, so its optimization can be valuable. Using the cortex-strings work and applying it to SQLite on Android yielded the Linaro Android team some significant results (20-35%). The Android use case is an interesting one in that the databases tend to be smaller, so while speed is one factor to optimize for, space and battery usage are also important. This work came at it only from the speed angle. It’s still a work in progress, so I’ll keep tuned.

Related to that as part of our GPGPU efforts I’m in the midst of optimizing SQLite with OpenCL. I’m at the stage of writing the OpenCL kernels so it’s too early to start talking about microbenchmark results. I’m also aiming at a slightly different use case. I’m looking at more sizable databases and more common operations that would be found with use as part of a LAMP or LAMP-like stack. I’ll be talking about this effort as part of the GPGPU session on Thursday.

During the afternoon hacking time, I was in solid meetings all day. This is the life of a tech lead sometimes. While it would be great to be heads down in vi this week, doing so would mean missing out on many wonderful opportunities to talk with people.

On Tuesday about half of the GWG and the LHG got together to talk about a variety of topics, mostly at an architectural level: some of the factors that go into making good technology choices when it comes to video playback and so on. As LHG gets off the ground they have a number of interesting (and fun!) challenges ahead of them.

The Media and Libs subteam got together and we held a project review. This is a perfect activity for Connect; it is quite good to perform detailed reviews from time to time. The good news is that the libvpx (VP8 & VP9), libjpeg-turbo and pixman optimization efforts for AARCH64 are coming along quite well, with patches flowing upstream. Today (Wednesday) we’ll be meeting to set the next list of libs that we will optimize for AARCH64.