Tracing OpenGL Errors with Mesa

Debugging GL errors can sometimes be a time-consuming task. Usually you need to query the OpenGL state machine with glGetError, which returns just an integer code for a single recorded error.

First of all, this requires a switch statement for checking the type of error, which could look like this:

When you execute this code, the loop will print all queued errors. This does not tell you when an error occurred, just that it happened at some point before the call.

With Mesa you can do the same by setting an environment variable.

$ export MESA_DEBUG=1

This will give you debug output similar to this:

Mesa: User error: GL_INVALID_VALUE in glClear(0x5f01)

The usual solution is to create a macro that prints the file and line in which the check was executed, and to put a call to it at the end of every piece of code that calls the OpenGL API.

This has to be done on proprietary drivers like NVIDIA's, since you do not have debug information there. A better approach is to get a backtrace for every failing GL call. For this, you need to rebuild Mesa with debug symbols, or install a debug package provided by your distribution.

To build Mesa with debug symbols you have to set the following compiler options:

export CFLAGS='-O0 -ggdb3'
export CXXFLAGS='-O0 -ggdb3'

On Arch Linux this can be done in the build() function when building mesa from the ABS or mesa-git from the AUR.

You will then be able to get a backtrace with gdb. Do not forget to build your application with debug symbols as well. For CMake projects like mesa-demos, you can achieve this with:

$ cmake . -DCMAKE_BUILD_TYPE=Debug

Run the application in gdb:

$ gdb ./src/glsl/geom-outlining-150

Set a breakpoint on _mesa_error with b at the gdb prompt:

(gdb) b _mesa_error
Function "_mesa_error" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (_mesa_error) pending.

Run the program with r and get a backtrace with bt:

(gdb) r
Starting program: mesa-demos/src/glsl/geom-outlining-150 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/".
Breakpoint 1, 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/
(gdb) bt
#0 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/
#1 0x0000000000403555 in Redisplay () at mesa-demos/src/glsl/geom-outlining-150.c:119
#2 0x00007ffff7313741 in fghRedrawWindow () from /usr/lib/
#3 0x00007ffff7313ba8 in ?? () from /usr/lib/
#4 0x00007ffff7315179 in fgEnumWindows () from /usr/lib/
#5 0x00007ffff7313c84 in glutMainLoopEvent () from /usr/lib/
#6 0x00007ffff7313d24 in glutMainLoop () from /usr/lib/
#7 0x0000000000403e41 in main (argc=1, argv=0x7fffffffe558) at mesa-demos/src/glsl/geom-outlining-150.c:392

Now we know that the failing GL call is in line 119 of geom-outlining-150.c, inside the Redisplay function called from the GLUT main loop.

You also get GL errors when using GLEW in core GL contexts, since it queries the deprecated GL_EXTENSIONS enum with glGetString. You can continue debugging with c. If you want a modern way to load a core context, try gl3w instead of GLEW.

Breakpoint 1, 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/
(gdb) bt
#0 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/
#1 0x00007ffff1fcb7b4 in _mesa_GetString () from /usr/lib/xorg/modules/dri/
#2 0x00007ffff70ba76a in glewInit () from /usr/lib/
#3 0x0000000000403deb in main (argc=1, argv=0x7fffffffe558) at mesa-demos/src/glsl/geom-outlining-150.c:381
(gdb) c
Mesa: User error: GL_INVALID_ENUM in glGetString(GL_EXTENSIONS)

Got Mandelbrot? A Mandelbrot set test pattern for the GStreamer OpenGL Plugins

I wanted to introduce you to the next station in my Mandelbrot world domination tour. Using C and GLSL, this Mandelbrot set visualization can be rendered to a video file.

$ gst-launch-1.0 gltestsrc pattern=13 ! video/x-raw, width=1920, height=1080 ! glimagesink

To run this command you need the patches for gst-plugins-bad I proposed upstream. After they are merged, you can try this with your distribution's GStreamer version.

This is the short patch required for the Mandelbrot set, made possible by the patch adding a generic shader pipeline for gltestsrc patterns.

Screenshot from 2014-08-21 02:24:08

Simple Mandelbrot Set Visualization in GLSL with WebGL

After experimenting with PyOpenGL and OpenGL 3.3 core pipelines in Python, I noticed that OpenGL 3 support in Python 3 still needs some bindings work, especially the GLEW stuff. Since WebGL does not need any bindings and is capable of creating a modern OpenGL ES 2 style pipeline, I decided to give it a try for the Mandelbrot set.

The result is these ~130 lines of HTML, CSS, JavaScript and GLSL.

You can give the demo a try at JSFiddle.

If you want to see some funky stuff, uncomment line 49.

Simple Mandelbrot Set Visualization in Python 3

Since I am currently studying for an analysis exam and have always been fascinated by fractals, I wrote a small Mandelbrot set visualization in Python.

The core formula is the series of z = z^2 + c.

It was implemented with NumPy complex numbers and the Pillow image library.

To increase performance this could be implemented in GLSL, since the operation parallelizes easily across many threads, for example one thread per pixel in the fragment shader.

HoVR – A Blender Game Engine Demo for the Oculus Rift and the Nintendo Balance Board

HoVR screenshot

HoVR is a virtual reality hover boarding demo. I created it in July 2013 for the CV-Tag, my university's demo day. Sadly I didn't find the time to publish it until now. Since I got the opportunity to demo it at this year's gamescom (Halle 10.2, E017), I thought it would also be a good idea to release it.

HoVR is written in Python and uses the Blender Game Engine to render its graphics and calculate the physics simulation. It uses the Python bindings for the Oculus Rift that I made and published last year, as well as Python bindings for the Wii Balance Board. They utilize the C libraries OpenHMD for the Rift and WiiC for the board. You can find python-rift and python-balanceboard on my GitHub, or try the Arch Linux AUR packages.

Furthermore, HoVR uses assets and rendering techniques made by Martins Upitis, who released his wonderful Blender Game Engine water demo on his blog.

You can download HoVR from my GitHub, or install it easily on Arch Linux with the AUR package.

I could provide bootable USB images, if there is any interest.

Things you need:

  • Oculus Rift (tested with DK1)
  • Wii Balance Board
  • Bluetooth Dongle
  • Arch Linux or other Unix
  • A little talent for hacking and stuff, until I create a more convenient way of running this

Windows users could try MSYS2, but they would need to port the packages. macOS wasn't tested, but should theoretically work.

HoVR setup in my Room

Transforming Video on the GPU

OpenGL is very well suited to calculating transformations like rotation, scale and translation. Since the video ends up on one rectangular plane, the vertex shader only needs to transform the four vertices of a quad (drawn as a GL_TRIANGLE_STRIP) and map the texture onto it. This is a piece of cake for the GPU, which was designed to do this with many, many more vertices, so the performance bottleneck will be uploading the video frame into GPU memory and downloading it again.

The transformations

GStreamer already provides some separate plugins that are basically suitable for doing one of these transformations.


videomixer: The videomixer does translation of the video with the xpos and ypos properties.

frei0r-filter-scale0tilt: The frei0r plugin is very slow, but it has the advantage of doing scale and tilt (translate) in one plugin. This is why I used it in my 2011 GSoC. It also provides a "clip" property for cropping the video.


rotate: The rotate element is able to rotate the video, but it has to be applied after the other transformations, unless you want borders.

Screenshot from 2014-06-16 17:54:44


videoscale: The videoscale element is able to resize the video, but has to be applied after the translation. Additionally it resizes the whole canvas, so it’s also not perfect.

frei0r-filter-scale0tilt: This plugin is able to scale the video and leave the canvas size as it is. Its disadvantage is being very slow.

So we have some plugins that do transformations in GStreamer, but you can see that combining them is nearly impossible and also slow. But how slow?

Let’s see how the performance of gltransformation compares to the GStreamer CPU transformation plugins.

Benchmark time

All the commands are measured with "time". The tests were done on the nouveau driver, using Mesa as the OpenGL implementation. All GPUs should have similar results, since not much is actually calculated on them; the bottleneck should be the upload.

Pure video generation

gst-launch-1.0 videotestsrc num-buffers=10000 ! fakesink

CPU 3.259s

gst-launch-1.0 gltestsrc num-buffers=10000 ! fakesink

OpenGL 1.168s

Cool, gltestsrc seems to run faster than the classical videotestsrc. But we are not uploading real video to the GPU! That's cheating! Don't worry, we will do real-world tests with files soon.

Rotating the test source

gst-launch-1.0 videotestsrc num-buffers=10000 ! rotate angle=1.1 ! fakesink

CPU 10.158s

gst-launch-1.0 gltestsrc num-buffers=10000 ! gltransformation zrotation=1.1 ! fakesink

OpenGL 4.856s

Oh cool, we’re as twice as fast in OpenGL. This is without uploading the video to the GPU though.

Rotating a video file

In this test we will rotate an HD video file with a duration of 45 seconds. I'm replacing only the sink with fakesink. Note that the CPU rotation needs videoconvert elements.

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! videoconvert ! rotate angle=1.1 ! videoconvert ! fakesink

CPU 17.121s

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! gltransformation zrotation=1.1 ! fakesink

OpenGL 11.074s

Even with uploading the video to the GPU, we’re still faster!

Doing all 3 operations

OK, now let's see how we perform doing translation, scale and rotation together. Note that the CPU pipeline has the problems described earlier.

gst-launch-1.0 videomixer sink_0::ypos=540 name=mix ! videoconvert ! fakesink filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! videoconvert ! rotate angle=1.1 ! videoscale ! video/x-raw, width=150 ! mix.

CPU 17.117s

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! gltransformation zrotation=1.1 xtranslation=2.0 yscale=2.0 ! fakesink

OpenGL 9.465s

No surprise, it’s still faster and even correct.


Let’s be unfair and benchmark the frei0r plugin. There is one advantage, that it can do translation and scale correctly, but rotation can only be applied at the end. So no rotation at different pivot points is possible.

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! videoconvert ! rotate angle=1.1 ! frei0r-filter-scale0tilt scale-x=0.9 tilt-x=0.5 ! fakesink

CPU 35.227s

Damn, that is horribly slow.

The gltransformation plugin is up to 3 times faster than that!


The gltransformation plugin does all three transformations together in a correct fashion, and is fast on top of that. Furthermore, three-dimensional transformations are possible, like rotating around the X axis or translating in Z. If you want, you can even use an orthographic projection.

I also want to thank ystreet00 for helping me to get into the world of the GStreamer OpenGL plugins.

To run the test yourself, check out my patch for gst-plugins-bad:

Also don’t forget to use my python testing script:


gltransformation utilizes ebassi's new graphene library, which implements the linear algebra needed for modern OpenGL without the fixed-function pipeline.

Alternatives worth mentioning for C++ are Qt's QMatrix4x4 and of course g-truc's GLM. These are not usable with GStreamer, though, and I was very happy that there was a GLib-style alternative.

After I wrote some tests, and with ebassi's wonderful and quick help, Graphene was ready for use with GStreamer!

Implementation in Pitivi

To make this transformation usable in Pitivi, we need some transformation interface. The last one I did was rendered in Cairo. Mathieu managed to get it rendered with the ClutterSink, but using the GStreamer OpenGL plugins with the clutter sink is currently impossible. The solution will either be to extend the glvideosink to draw an interface over the video, or to make the clutter sink work with the OpenGL plugins. But I am not really a fan of the clutter sink, since it has introduced problems in Pitivi.