ERROR: One or more PGP signatures could not be verified!

Since I had to look this up too many times already, I thought I had better write a quick blog post on how to resolve it, so I won’t need to dig through that thread anymore.

You may encounter the following error during makepkg, most prominently with the cower package, a dependency of the popular AUR helper pacaur:

==> Verifying source file signatures with gpg...
 cower-17.tar.gz ... FAILED (unknown public key 1EB2638FF56C0C53)
==> ERROR: One or more PGP signatures could not be verified!
:: failed to verify cower integrity

The solution here is not to add the key with pacman-key, but with gpg, like this:

$ gpg --recv-keys 1EB2638FF56C0C53
gpg: key 1EB2638FF56C0C53: public key "Dave Reisner <>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1

Have fun in the AUR and don’t forget to contribute.


Making Viewer UIs for Pitivi

Having already experimented with two transformation box approaches for Pitivi in the past, I seemed to the maintainers like the right person to build a modern one.

Creating a user interface for a video transformation requires three things:

  • The implementation of the transformation
  • A way to draw the widgets over the viewer and
  • Mapping the input to the reverse transformation

The transformation

First of all, the implementation of the transformation, which in our case is scaling and translation, is currently done by GES.UriSource and calculated on the CPU. In the first Pitivi transformation box I did in GSoC 2012 this was done by the notorious Frei0r plugins from GStreamer Plugins Bad, also a CPU implementation. In the second version it was done on the GPU with the gltransformation element I wrote for GSoC 2014.

A method to draw widgets over the viewer

In Pitivi’s case, the viewer is a GStreamer sink. In all three versions rendering of the overlay widgets was done by Cairo, but it was done differently for all three implementations, since they all used different sinks.



2012: The first one used a hacky solution where the sink and Cairo drew in the same Gtk drawing area, acquired for GStreamer with the Gst Overlay API. Many Gtk and GStreamer devs wondered how this worked at all. This, and Pitivi switching to more modern sinks, was the reason why the first version of the box didn’t stay upstream for long.



2014: Still using the Gst Overlay API, but this time with glimagesink. The Cairo widgets are rendered into OpenGL textures and composited in the glimagesink draw callback in the GStreamer GL context. It worked pretty smoothly for me, but didn’t provide a fallback for users without GL. Clearly an approach for the future, but how about something solid?



2016: Now we have the almighty GtkSink in Pitivi. It is a Gtk widget, and overlays can be added via Gtk Overlay. The sink is also exchangeable with GtkGLSink, which uses a GStreamer GL context to display the video texture and can also use GStreamer GL plugins like gltransformation without needing to download the GPU memory with gldownload. Cairo rendering can now easily be added over GStreamer sinks, yay.



Linking the UI with the components doing the transformation

The mapping of the input from the UI to the transformation is clearly dependent on the transformation you are using. In the 2012 version I needed to map the input to frei0r^-1. In 2014 I used an OpenGL Model-View-Projection matrix calculated in Graphene, which could also do rotations and 3D transformations (we have a z-axis, yay).

The 2016 implementation uses the inverse of the GES.UriSource transformation, which is done by the GStreamer elements videomixer and videoscale. Of course things like keeping the aspect ratio, maintaining limits and transforming Gtk widget coordinates into the transformation’s coordinates are part of this third ingredient.
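To make that third ingredient concrete, here is a minimal Python sketch of the coordinate mapping idea, assuming a plain scale-plus-translate model; the function names and sizes are hypothetical, not Pitivi’s actual API.

```python
# Hypothetical sketch: map a drag in Gtk widget pixels back to the
# source's translation, i.e. the inverse of the viewer's scale-down.
# Names and the transformation model are assumptions, not Pitivi code.

def widget_to_source_delta(dx_widget, dy_widget,
                           widget_w, widget_h,
                           project_w, project_h):
    """Convert a drag delta in widget pixels to project pixels."""
    sx = project_w / widget_w   # the widget shows a scaled-down project
    sy = project_h / widget_h
    return dx_widget * sx, dy_widget * sy

def apply_drag(posx, posy, dx_widget, dy_widget,
               widget_size=(960, 540), project_size=(1920, 1080)):
    """Return the new source position after a drag in the viewer."""
    dx, dy = widget_to_source_delta(dx_widget, dy_widget,
                                    *widget_size, *project_size)
    return posx + dx, posy + dy
```

A 10-pixel drag in a half-size viewer thus moves the source by 20 project pixels; clamping to limits and aspect-ratio handling would sit on top of this.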



The new transformation box fits Pitivi well by making clips selectable from the viewer, so you can manage multiple overlapping clips quite easily. But the best part of this implementation may be its extensibility. I already made two overlay classes: one for normal clips, which uses a GES.UriSource transformation, and one for title clips aka GES.TextSource, which uses different coordinates and different GStreamer plugins. In this fashion other plugins can be written for the Pitivi viewer, for example for 3D transformations with gltransformation. Or you could do crazy stuff like a UI for barrel distortion.

Clone the code and contribute! 🙂


If you have any questions about this ask me. I’m lubosz on Freenode IRC.

Introducing GStreamer VR Plug-ins and SPHVR

This article sums up some VR R&D work I have been doing lately at Collabora, so thanks for making this possible! 🙂

Previously on GStreamer

Three years ago, in 2013, I released an OpenGL fragment shader you could use with the GstGLShader element to view side-by-side stereoscopic video on the Oculus Rift DK1 in GStreamer. It used the headset only as a stereo viewer and didn’t provide any tracking; it was just a quick way to make any use of the DK1 with GStreamer at all. Side-by-side stereoscopic video was becoming very popular, due to “3D” movies and screens gaining popularity. With its 1.6 release GStreamer added support for stereoscopic video, though I didn’t test side-by-side stereo with that.

Why is planar stereoscopic video not 3D?

Stereoscopic video does not provide the full 3D information, since the perspective is always given for a certain view, or parallax. Mapping the stereo video onto a sphere does not solve this, but at least it stores color information independent of the view angle, so it’s way more immersive and gets described as a telepresence experience. A better solution for “real 3D” video would of course be capturing a point cloud with as many sensors as possible, filtering it and constructing mesh data out of it for rendering, but more on that later.

A brief history of mapping imagery on anything different than a plane

Nowadays mankind projects its imagery mostly onto planes, as seen on most LCD screens, canvases and Polaroids. Although this seems to be a physical limitation, there are some ways to overcome it, in particular curved LCD screens, fancy projector setups occasionally seen in art installations and, most recently, virtual reality head-mounted displays.

Projecting our images onto shapes other than planes in virtual space is not new at all, though. Still panoramas have very commonly been projected onto cylinders, not only in modern photo viewer software, but also in monumental paintings like the Racławice Panorama, which is housed inside a cylinder-shaped building.



But to store information from each angle in 3D space we need a different geometric shape.

The sphere

Spherical projection is used very commonly, for example in Google Street View and of course in VR video.

As we are in 3D, a regular angle wouldn’t be enough to describe all directions on a sphere, since 360° can only describe a full circle, a 2D shape. In fact we have two angles, θ and φ, also called inclination and azimuth.



radius r, inclination θ, azimuth φ


You can calculate Cartesian coordinates from spherical coordinates like this:

x=r \, \sin\theta \, \cos\varphi
y=r \, \sin\theta \, \sin\varphi
z=r \, \cos\theta
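As a quick sanity check, the conversion is only a few lines of Python (a sketch, using the inclination/azimuth convention above):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """r: radius, theta: inclination, phi: azimuth (both in radians)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# The "north pole" (theta = 0) maps to (0, 0, r):
print(spherical_to_cartesian(1.0, 0.0, 0.0))  # (0.0, 0.0, 1.0)
```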

For describing the angle on a sphere we can use the solid angle Ω, calculated by integrating over the two angles θ and φ.

d\Omega = \sin\theta\,d\theta\,d\varphi
\Omega = \iint\limits_S \sin\theta\,d\theta\,d\varphi

The unit for the solid angle is the steradian, where the full sphere is 4π sr, hence the hemisphere 2π sr.
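Carrying out the integration over the full sphere confirms this:

\Omega_{sphere} = \int_0^{2\pi}\int_0^{\pi} \sin\theta\,d\theta\,d\varphi = \int_0^{2\pi} \left[-\cos\theta\right]_0^{\pi} d\varphi = \int_0^{2\pi} 2\,d\varphi = 4\pi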



This is why the term 360° video does not suit spherical video: there are other popular shapes for projecting video that have φ = 360°, like cylinders. 180° video also usually uses a half cylinder. With video half spheres, or hemispheres, you could for example texture a skydome.

So did you implement this sphere?

In Gst3D, a small graphics library currently supporting OpenGL 3+, I provide a sphere built from one triangle strip, with an equirectangular mapping of UV coordinates, which you can see in yellow to green. You can switch to wireframe rendering with Tab in the vrtestsrc element.
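The construction can be sketched in a few lines of Python; this is my own illustrative version of a UV sphere with equirectangular texture coordinates and a single degenerate-stitched triangle strip, not the actual Gst3D code:

```python
# Illustrative sketch of an equirectangular-mapped UV sphere, in the
# spirit of Gst3D's sphere (names and layout are assumptions of mine).
import math

def uv_sphere(stacks=8, slices=16, r=1.0):
    """Return (vertices, uvs): positions on the sphere plus
    equirectangular texture coordinates u = phi/2pi, v = theta/pi."""
    verts, uvs = [], []
    for i in range(stacks + 1):
        theta = math.pi * i / stacks          # inclination: 0..pi
        for j in range(slices + 1):
            phi = 2.0 * math.pi * j / slices  # azimuth: 0..2pi
            verts.append((r * math.sin(theta) * math.cos(phi),
                          r * math.cos(theta),
                          r * math.sin(theta) * math.sin(phi)))
            uvs.append((j / slices, i / stacks))
    return verts, uvs

def strip_indices(stacks=8, slices=16):
    """One triangle strip over the whole grid, stitched between
    stacks with degenerate (zero-area) triangles."""
    idx, row = [], slices + 1
    for i in range(stacks):
        if i > 0:
            idx.append(i * row)                 # repeat first index of row
        for j in range(row):
            idx.extend((i * row + j, (i + 1) * row + j))
        if i < stacks - 1:
            idx.append((i + 1) * row + slices)  # repeat last index of row
    return idx
```

Duplicating the last and first indices between stacks produces degenerate triangles, which lets the whole grid be drawn as a single strip.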

The vrtestsrc element

$ gst-launch-1.0 vrtestsrc ! glimagesink

How do you track the HMD?

For my HoVR demo in 2014 I made Python bindings for using OpenHMD in the Blender game engine and had a good experience with it. Although its HMD support is very limited (it currently only supports the IMU, the Inertial Measurement Unit, of the DK2), it is very lightweight, since it uses hidapi directly and requires no daemon. In contrast, the proprietary Oculus driver OVR, which I used in my demo HoloChat in 2015, is very unstable and unsupported.

This is why I decided to use OpenHMD as a minimal approach for initial VR sensor support in GStreamer. For broader headset support, and because I think it will be adopted as a standard, I will implement support for OSVR in gst-plugins-vr in the future.

What if I do not have an HMD?

No problem, you can view spherical videos and photos anyway. Currently you can compile gst-vr without OpenHMD and view things with an arcball camera, without stereo. So you can still view spherical video projected correctly and navigate with your mouse. This fallback mode would probably best be selected at run time.

But for VR you need stereo rendering and barrel distortion, right?

Right, they are the core components required in a VR renderer. Stereo rendering and projection according to the IMU sensor happen in vrcompositor, which can also be used without an HMD, with mouse controls.

The hmdwarp element

For computing the HMD lens distortion, or barrel distortion, I use a fragment-shader-based approach. I know that there are better methods for doing this, but it seemed like a simple and quick solution, and it does not really eat up much performance.
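The idea behind such a shader can be sketched in Python: each output pixel looks up a radially displaced input coordinate. The k1/k2 coefficients below are made-up placeholders, not the DK2’s real lens parameters.

```python
# Fragment-shader-style barrel distortion, written in Python for
# illustration. k1/k2 are placeholder coefficients, not DK2 values.
def warp(u, v, k1=0.22, k2=0.24, cx=0.5, cy=0.5):
    """Map an output texture coordinate (u, v) in [0,1]^2 to the input
    coordinate to sample, using r' = r * (1 + k1*r^2 + k2*r^4)."""
    du, dv = u - cx, v - cy
    r2 = du * du + dv * dv
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + du * scale, cy + dv * scale

# The centre is a fixed point; points away from it are pushed outward,
# increasingly so towards the edges, which is what cancels the lens's
# pincushion distortion.
assert warp(0.5, 0.5) == (0.5, 0.5)
```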

Currently the lens attributes are hardcoded for the Oculus DK2, but I will soon support more HMDs, in particular the HTC Vive, and everything else that OSVR support could offer.

$ GST_GL_API=opengl gst-launch-1.0 gltestsrc ! hmdwarp ! glimagesink


The vrcompositor element

gst-plugins-vr Design (1)

$ gst-launch-1.0 uridecodebin uri=file:///home/bmonkey/Videos/elephants.mp4 ! \
               glupload ! glcolorconvert ! videorate ! vrcompositor ! \
               video/x-raw\(memory:GLMemory\), width=1920, height=1080, framerate=75/1 ! \
               hmdwarp ! gtkglsink

In this example pipeline I use GtkGLSink, which works fine but only provides a refresh rate of 60Hz, which is not really optimal for VR. This restriction may reside in Gtk or the window management; I still need to investigate it, since it also appears when using the Gst Overlay API with Gtk.


Viewing equirectangular projected photos



You can just do an image search for “equirectangular” and you will get plenty of images to view. Using imagefreeze in front of vrcompositor makes this possible. Image support is not implemented in SPHVR yet, but you can just run this pipeline:

$ gst-launch-1.0 uridecodebin uri= ! \
    imagefreeze ! glupload ! glcolorconvert ! vrcompositor ! \
    video/x-raw\(memory:GLMemory\), width=1920, height=1080, framerate=75/1 ! \
    hmdwarp ! glimagesink

Multiple outputs?

In most VR applications a second output window is created to spectate the VR experience on the desktop. In SPHVR I use the tee element to create two GL sinks and put them in two Gtk windows via the GStreamer Overlay API, since GtkGLSink still seems to have its problems with tee.

$ GST_GL_XINITTHREADS=1 gst-launch-1.0 filesrc location=~/video.webm ! \
    decodebin ! videoscale ! glupload ! glcolorconvert ! videorate ! \
    vrcompositor ! \
    video/x-raw\(memory:GLMemory\), width=1920, height=1080, framerate=75/1 ! \
    hmdwarp ! tee name=t ! queue ! glimagesink t. ! queue ! glimagesink


Pronounced “sphere”, SPHVR is a Python video player using gst-plugins-vr. Currently it is capable of opening a URL of an equirectangular-mapped spherical video.

You need to configure your Oculus DK2 screen to be horizontal, since I do not do a rotation in SPHVR yet. Other HMDs may not require this.

Screenshot from 2016-07-01 16-46-20

SPHVR detects your DK2 display using GnomeDesktop and Gdk if available and opens a full screen VR window on it.

To open a video:

$ sphvr file:///home/bmonkey/Videos/elephants.mp4

Sensors for spherical video

Spherical video sensors range from consumer devices like the Ricoh Theta for $300 to the professional Nokia Ozo for $60,000. But in general you can just use two wide-angle cameras and stitch their output together correctly. This functionality is mostly found in photography software like Hugin, but will need to find a place in GStreamer soon. GSoC, anyone?

Why is sphere + stereo still meh?

The other difficulty besides stitching in creating spherical video is of course stereoscopy. The parallax being different for every pixel and eye makes it difficult to be losslessly transformed from the sensor into the intermediate format and to the viewer. Nokia’s Ozo records stereo with 8 stereo camera pairs in each direction, adjusted to a horizontal default eye separation assumption for the viewer. This means that rotating your head around the axis you are looking along (for example tilting the head to the right) will still produce a wrong parallax.

John Carmack stated in a tweet that his best prerendered stereo VR experience was with renderings from Octane, a renderer from OTOY, who also created the well-known Brigade, a path tracer with real-time capabilities. You can find the stereoscopic cube maps on the internet.

So it is apparently possible to encode correct projection in a prerendered stereoscopic cube map, but I still assume that the stereo quality would be highly anisotropic, especially when translating the viewer position.

Stereoscopic spherical video also stores no real depth information, but we could encode depth information projected spherically around the viewer: a spherical video plus a spherical depth texture, constructed from whatever sensors are available, would be more immersive and correct than having 3D information as a plain stereo image. But this solution would still lack the ability to move around the room.

I think we should use a better format for storing 3D video.

Room-scale VR Video

If you want to walk around the stuff in your video, maybe one could call this holographic, you need a point cloud with absolute world positions. This of course could be converted into a vertex mesh with algorithms like Marching Tetrahedra, compressed and sent over the network.

Real 3D sensors like laser scanners, or other time-of-flight cameras like the Kinect v2, are a good start. You can of course also reconstruct 3D positions from a stereoscopic camera and calculate a point cloud from them, but the result is still just a point cloud.

Point clouds?

In a previous post I described how to stream point clouds over the network with ROS, which I also did in HoloChat. Porting this functionality to GStreamer was always something that tempted me, so I implemented a source for libfreenect2. This work is still pretty unfinished: I need to implement 16-bit float buffers in gst-video to transmit the full depth information from libfreenect2, so the point cloud projection is currently wrong, the color buffer is not yet mapped onto it, and no mesh is constructed. The code could use some performance-improving attention as well, but here are my first results.
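The core of such a point cloud builder is a pinhole back-projection: each depth pixel becomes a 3D point. Here is a hedged Python sketch of that step; the intrinsics below are placeholder values, not the real Kinect v2 calibration, and this is not the pointcloudbuilder code itself.

```python
# Sketch of pinhole back-projection from a depth image to 3D points.
# fx/fy/cx/cy are placeholder intrinsics, not Kinect v2 calibration.
def depth_to_points(depth, width, height,
                    fx=365.0, fy=365.0, cx=None, cy=None):
    """depth: flat row-major list of metric depth values (z, in metres)."""
    cx = width / 2.0 if cx is None else cx
    cy = height / 2.0 if cy is None else cy
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0.0:          # skip invalid measurements
                continue
            points.append(((u - cx) * z / fx,   # X
                           (v - cy) * z / fy,   # Y
                           z))                  # Z
    return points
```

With 16-bit float depth buffers in place, a GL implementation would do the same computation per vertex on the GPU instead of on the CPU.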

The freenect2src and pointcloudbuilder elements.

gst-plugins-vr Design

Show Kinectv2 infrared image

$ gst-launch-1.0 freenect2src sourcetype=2 ! glimagesink

or color image

$ gst-launch-1.0 freenect2src sourcetype=1 ! glimagesink


View a point cloud from the Kinect

$ gst-launch-1.0 freenect2src sourcetype=0 ! glupload ! glcolorconvert ! \
   pointcloudbuilder ! video/x-raw\(memory:GLMemory\), width=1920, height=1080 ! \
   glimagesink

Since the point cloud is also a Gst3D scene, it can already be viewed with an HMD, and since it’s part of GStreamer, it can be transmitted over the network for telepresence, though there is currently no example doing this yet. More to see in the future.


You can find the source of gst-plugins-vr on Github. An Arch Linux package is available on the AUR. In the future I plan distribution via flatpak.


In the future I plan to implement more projections, for example 180° / half-cylinder stereo video and stereoscopic equirectangular spherical video. OSVR support, improving point cloud quality and point cloud to mesh construction via marching cubes or similar are also possible things to do. If you are interested in contributing, feel free to clone.

Impressions of SVVR 2016

I was able to attend SVVR 2016 last week, where I got many insightful VR impressions and saw where the industry is currently heading. I also want to thank Collabora for making this possible. Note that the opinions in this article are my own and not Collabora’s.



Since I have always been enthusiastic about voxel graphics, I needed to check out SculptrVR’s HTC Vive demo. It utilizes the Vive’s SteamVR controller and the Lighthouse full-room tracking system to achieve a unique Minecraft-esque modeling experience. The company is a startup dedicated to this one product. The editor is currently capable of creating voxels with different cube sizes and colors, and more importantly of destroying them dynamically with rockets. The data structure used is a Sparse Voxel Octree implemented in C++ on top of the Unreal 4 engine. It is capable of exporting the models to an OBJ vertex mesh. Voxel format export like MagicaVoxel is not yet supported. The prototype was implemented in the Unity engine with Leap Motion input and Oculus DK2 HMD support, but the developer dropped it in favour of the Vive and Unreal 4, which gave him more solid tracking and rendering. Product development will continue in the direction of social model sharing and game support. Their software is available on Steam for $20.

Whirlwind VR

One of the most curious hardware accessories was presented by Whirlwind VR: basically a fan with a USB connector and a Unity engine plugin. It obviously adds immersion to demos involving rapid movement in an open-air vehicle. Other cases utilize the fan’s heating system to simulate a dragon breathing fire into your face. A widespread market is questionable for this product, but I encourage exploring every bit of uncaptured human sense left.



The creators of the first consumer-available VR controller, the Razer Hydra, were presenting their new full-room body-scale tracking system STEM, including VR controllers for your hands and three boxes for feet and head. It is targeted at Oculus Rift CV1 customers, who do not receive this functionality out of the box, in contrast to the HTC Vive. The demo setup was a fun 2-player medieval bow shooting game, created with the Unreal 4 engine. One PC was using the standard Vive Lighthouse tracking, the other an Oculus CV1 with Sixense’s product prototype. Tracking results were very comparable, though I would prefer the Vive due to the controller’s more finished user experience and feel, even more so considering the $995 price tag for the full 5-tracker system, compared to the HTC Vive’s $300 difference to the Oculus CV1.


A personally very interesting demo was presented by Paris-based company mimesys, which specializes in telepresence, or “holographic telecommunication”. They used a Kinect v2 for capturing a point cloud, reconstructing a polygon mesh and sending it compressed over the internet, which is comparable to ideas in my prior work HoloChat. In their live demo you were able to use the SteamVR controller to draw in the air and “Holoskype” with their colleague, who was live in Paris. The quality of mesh and texture streaming was at a very early stage from my point of view, knowing there is more potential in a Kinect v2 point cloud. Overall the network latency of the mesh was pretty high, but unnoticeable since voice chat was the primary communication method (which was achieved over Skype, btw). The company’s product is a Unity engine plugin implementing vertex and texture transfers over the internet. The usage of video codecs for textures would improve this kind of data transfer, which was not implemented as far as I noticed.

Tactical Haptics

A very well researched haptic feedback controller prototype was presented by Californian company Tactical Haptics. You notice the academic background in mechanical engineering and haptics when you talk to professor and founder William Provancher. He knew numbers like 1000Hz, which is required for haptic feedback to trick the human skin into feeling real time, in contrast to only 60Hz for the human eye. Their physics engine, running at only ~100Hz (I don’t remember the exact number), was more than sufficient for their first-class haptics system. With the interactions in the demo being rather rough, like juggling cubes with Jedi powers or shooting drones with a bow, the latency was more than appropriate for an immersive haptic experience unique on the expo floor. Their product Reactive Grip manages to replicate the “skin sensations of actually holding an object” and has imaginable widespread use in future VR controllers for action experiences and workouts.


Japanese hardware vendor Ricoh was presenting their already marketed, consumer-targeted spherical camera. It is able to seamlessly stitch its two wide-angle sensors (each more than hemispherical) in hardware and send the result to the phone, providing a very user-friendly interface. A client-agnostic REST-like interface is provided to control the camera with the Android and iOS clients. Streaming video is also possible, but requires a PC with their proprietary driver for Windows and macOS for real-time stitching. The camera costs only about 400 bucks and will very soon flood the internet with many amateur spherical photos and videos. Spherical video and audio was a big topic at the expo, but I have a problem with its marketing term, 360° video, since degrees are for 2D angles, and in 3D we deal with solid angles. So please call it 4π steradian video from now on if you address the full sphere, or 2π sr for the hemisphere.

Nokia Ozo

Nokia has stopped making unpopular phones and started targeting the professional spherical video artist with their 8-sensor spherical stereoscopic camera, available for an affordable 60 grand. The dual-sensor setup in every direction, separated by average eye distance, was said to provide perfect conditions for stereo capturing. They also provide the editing software to be used with it. It seems that for editing purposes the raw sensor data is stored as 2×4 circular tiles on a planar video. The video can surely be exported into two (because we have stereo) spheres with the commonly used equirectangular mapping on a plane, which is more storage efficient, since we do not have tons of black borders. Their live demo, where you could view the “live” camera output with a DK2, was rather disappointing because of a latency of 3s (yes, full seconds) and very noticeable seams. Their software does not target live video processing yet, but there wasn’t a finished rendering available either. The camera looks like a cute android, though.


High Fidelity

San Francisco based, future-driven company High Fidelity wants to build for VR what the Apache server is for the web. In the keynote, founder Philip Rosedale talked about the “hyperlink for VR”. They provide an in-house game engine with multiplayer whiteboard drawing and voice chat support. The client is open source software, as is the server, and meant to run on Linux, Windows and macOS. Telepresence is a very important topic for VR, but High Fidelity still lacks the possibility of including depth sensors like the Kinect or spherical cameras in their world, and it represents humans by virtual models, which brings you directly into the uncanny valley, especially when skeleton animation is buggy and your model is doing unhealthy-looking yoga merged into the floor. Great stuff though; looking forward to building it for myself and fixing some Linux client issues 🙂


Razer was presenting their open source middleware, which wraps available drivers into their signal processing framework and does things like sensor fusion using available computer vision algorithms like the ones in OpenCV. They also provide their OSVR-branded headset, marketed as the Hacker Development Kit, which has the freedom of a replaceable display and other repairable components. The headset does 1920×1080@60Hz, a slightly worse frame rate than achieved by the Oculus DK2. The most remarkable factor for this headset was the visible lack of a screen door effect at this resolution, achieved by a physical distortion filter on the display. If you don’t own a DK2 and don’t have the € for a Vive, the $300 OSVR is a very solid option. Also, using OSVR as software will make supporting hardware easier, and not only headsets, since it also provides support for controllers like SteamVR’s and tracking systems like Leap Motion. You only need to implement OSVR once, instead of wrapping all HMD and input APIs in your application. Valve’s OpenVR is also an attempt to do that, but lacked presence at the conference.



Graphical horsepower could be experienced in both NVIDIA demos, running on current high-end NVIDIA GTX 980 cards. I was a little disappointed they did not bring one of their yet-to-be-released Pascal architecture cards, though. NVIDIA had both high-end consumer headsets, the Oculus CV1 and the HTC Vive, on its show floor. The Oculus demo was gaming oriented; since the Vive is more capable for VR productivity demos, due to its full-room tracking and VR controller, it was used for the more creative things.

Oculus CV1 + Eve: Valkyrie

The first demo was showcasing the Oculus CV1 with the spaceship shooter Eve: Valkyrie. The rendering was smooth thanks to the CV1’s 90Hz refresh rate, and the screen door effect was also eliminated by the HMD’s 2160×1200 resolution. Experienced VR users will very quickly notice the small tracking area, basically limited to a chair in front of a desk, in contrast to full-room tracking with the HTC Vive. The user experience is also very traditional, using only an Xbox game pad. With the lack of a VR controller, users very unintuitively experience the absence of their hands in VR and cannot interact in 3D space. Hands are much better for this than an analogue stick, as we will see further below in this article.


Classical game pads also have the problem of changing button order with every manufacturer, which makes it very complicated even for experienced gamers to know which button is accept and which one is back. This issue also landed me in Eve: Valkyrie’s payment acceptance screen, which the booth exhibitor had to guide me out of.

HTC Vive + Google Tilt Brush

One of the most influential VR experiences for me was Google’s Tilt Brush on the HTC Vive, which is 2160×1200@90Hz as well, btw. The ability to draw freely in 3D space with full-room tracking and the Vive’s immersive display capabilities provides a very unique experience which feels like a new medium for artists. The user interface is very simple, with a palette on the left controller and a brush on the right. Of course you can switch hands easily if you are left-handed. This natural input and camera movement lets the user be creative in 3D space without the steep learning curve of contemporary “2D interface” CAD software. The creative process of expression possible with Tilt Brush is by itself a good reason for getting an HTC Vive for home already. Looking forward to the stuff artists can do now.

The other demo on the productivity booth was a point cloud of NVIDIA’s new headquarters’ construction site, recorded by drones with depth sensors. The scene’s resolution and rendering were not remarkable, but it was still fun to navigate in with the SteamVR controllers. I can definitely see architects and construction engineers planning their stuff in VR.



Another one of the top three demos I was able to experience was presented by #ProjectAlice under the name Follow the White Rabbit. They are utilizing Noitom’s tracking system to achieve a remarkable multiplayer VR demo, where real objects like garbage cans are perfectly tracked into the virtual world and can interact with virtual objects. They were using regular Wiimote controllers with plastic markers to showcase the potential of their own tracking. Their demo emphasizes how important interaction is in the virtual world and how natural it needs to feel. The scale of the tracking system suits public installations more than home environments, but I would love to see more real/virtual world interaction in consumer products, which can also be achieved with consumer trackers like the HTC Vive’s Lighthouse. Also note the hygienic face masks they offered for the public HMDs. They were the ninjas of VR.





Keynote speakers showed their current products and market visions. AltspaceVR, NVIDIA, Nokia, High Fidelity and SpaceVR gave presentations. SpaceVR will launch a spherical camera into space, which you can stream from home onto your headset. Road to VR editor Benjamin Lang gave his insights into the development of the industry so far and his 100-year forecast to achieve perfect immersion. I think we will be there in under 30.
Palmer Luckey was also there to drop some game title names from the Oculus store, and quickly ran off the expo terrain after his session to avoid much public interaction.

Light Fields 101

Compared to conventional CMOS sensors, light field cameras like Lytro are able to receive photon rays from multiple directions and offer a data set where things like focusing can be done in post-production or in live VR environments. Ryan Damm shared insights from his understanding of and research into light field technology, where according to him many things happen secretively, and mentioned companies like Magic Leap, which are still in the process of developing their product.

How Eye-Interaction Technology Will Transform the VR Experience

Jim Marggraff presented his long professional experience with eye tracking and how necessary it is in natural interfaces. He showed a whack-a-mole game on the Oculus DK2 where head tracking and eye tracking were compared. A random person from the audience could obviously select the moles faster with eye tracking than by moving his head. He also showed a full-featured eye tracking operating system interface where everything could be done with just your eyes, from buying something on Amazon to chatting. Password authentication is also drastically simplified with eye tracking, since you get free retina recognition with it. I think eye tracking will be as essential as hand tracking in future VR experiences, since human interaction and natural input are the next important steps that need to achieve perfection, after we have 4k@144Hz HMDs.

WebVR with Mozilla’s A-Frame

Not only is Ben Nolan the right guy to look for if you’re into authentic American craft beer bars in San Jose, he is also a JavaScript developer with enthusiasm for VR in the web browser. He showed A-Frame, an open source JavaScript engine with HMD support which is already available in the popular browsers’ nightly builds. The engine contains an XML-based model format and scene graph, physics and shading extensibility utilizing the WebGL standard. The big benefit of having VR support in the browser is clearly ease of distribution and quick content generation. He pointed out that a minimal A-Frame project is as small as 1kB, where a Unity web build is at least ~0.5MB. Type into your Android phone and try it for yourself.

Apollo 11 VR – (Designed to Demo, Developed to Educate)

David Whelan of the Ireland based VR Education Ltd pointed out how important VR will be in future education. He made a very good point that experiencing something tends to leave a stronger memory of facts than sitting in a classroom. He showed the development process of their Kickstarter funded Apollo 11 VR demo, and their current work Lecture VR, both of which are already available on Steam.

The Missing Link: How Natural Input Creates True Immersion for VR/AR

One of the most spot-on talks was given by Leap Motion co-founder David Holz, who pointed out the necessity of natural input in VR. Leap Motion is a ~$100 high-FOV infrared camera which has been available since 2013. But their major technological breakthrough only became apparent this year, after they released the second iteration of their tracking software, code-named Orion. Holz showcased the stability and precision of their hand tracking on stage, which was quite remarkable. But he won the audience over when he showed their current Blocks demo, where objects can be manipulated with a physically plausible interaction engine. Natural gestures are used to create objects, grab them and throw them around. If you didn't see the demo, try it at home; you just need a DK2 and a Leap Motion sensor. It feels a generation ahead of the ordinary VR demo and shows how much immersion is gained by seeing your hands and even using them for interaction. He also showed user interface designs for VR, which are projected onto the body and into the room. Conventional 2D interfaces, where we need to stack windows and tabs, seem very primitive in comparison. He also talked about how VR/AR interfaces will eliminate the necessity of having a work desk and chair, since all meetings and work can be done in a forest or a lounge.


The expo made clear how important novel human interaction methods are in VR: it is as obvious to replace the keyboard, mouse and game pad with natural body movement tracking as it was to replace conventional displays with HMDs.

A big part of the industry also focuses on spherical video, since it is currently the quickest method of bringing the real world into VR.

All in all, exciting stuff, thanks for reading.

TL;DR: Get your hands on an HTC Vive + Tilt Brush and Leap Motion Blocks.


Holochat – A Holographic Telecommunication Demo


In July 2015 I did a VR demo at the CV Tag at the University of Koblenz, which used two Arch Linux PCs with two Oculus Rift DK2s and two Kinect v2s. It utilizes ROS's capabilities to stream ROS topics over the network.

To run Holochat you need to set up libfreenect2 and ROS first, as explained in Viewing Kinect v2 Point Clouds with ROS in Arch Linux.

ROS Topics

You can list the ROS topics with following command

rostopic list

You will see that kinect2_bridge provides topics in 3 different resolutions: [ hd | qhd | sd ], and also different formats and the option for compression. The IR image is only available in SD, due to the sensor size.

To test their bandwidth you can use

$ rostopic bw /kinect2/hd/image_color
subscribed to [/kinect2/hd/image_color]
average: 101.23MB/s
 mean: 6.22MB min: 6.22MB max: 6.22MB window: 14
average: 96.92MB/s
 mean: 6.22MB min: 6.22MB max: 6.22MB window: 29

You will notice that the uncompressed color image needs ~101.23MB/s, the calculated uncompressed depth image ~125.66MB/s, and the uncompressed IR image only ~12.75MB/s.

By default kinect2_viewer accesses /kinect2/qhd/image_color_rect and /kinect2/qhd/image_depth_rect. The IR mode has a lower bandwidth, since sd/image_ir_rect and sd/image_depth_rect combined require only ~28MB/s, and compressed ~17MB/s, which should be achievable over 100MBit/s LAN.
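As a sanity check, the 6.22MB mean message size reported by rostopic bw above is exactly one uncompressed 1920×1080 color frame at 3 bytes per pixel; a quick back-of-the-envelope sketch (my own, not part of the toolchain):

```python
# One uncompressed HD color message: 1920x1080 pixels, 3 bytes (BGR) each.
# This matches the ~6.22MB mean size reported by `rostopic bw`.
width, height, bytes_per_pixel = 1920, 1080, 3
frame_bytes = width * height * bytes_per_pixel

print("%.2fMB per frame" % (frame_bytes / 1e6))  # 6.22MB per frame
```

The observed ~101MB/s therefore corresponds to roughly 16 of these frames per second arriving at the subscriber.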

The depth buffer + IR point cloud will look like this


You can set topic options as explained on the help page

$ rosrun kinect2_viewer kinect2_viewer -h
/opt/ros/jade/lib/kinect2_viewer/kinect2_viewer [options]
 name: 'any string' equals to the kinect2_bridge topic base name
 mode: 'qhd', 'hd', 'sd' or 'ir'
 visualization: 'image', 'cloud' or 'both'
 'compressed' use compressed instead of raw topics
 'approx' use approximate time synchronization

Let's add some VR support to the viewer

In order to make this holographic, I patched the kinect2_viewer with OculusSDK support. Since the SDK is no longer maintained, I recommend jherico's modified version, which can be found in oculus-rift-sdk-jherico-git on the AUR.

At the time I wrote the patches, OpenHMD was lacking head tracking support and libraries like Valve's OpenVR and Razer's OSVR were not around. They are still not really usable with the DK2 at the time I am writing this article.

My patched iai-kinect branch can be found on GitHub. I made an AUR package with the viewer and my VR patches.

$ pacaur -S ros-jade-kinect2-viewer-oculus

To run the patched viewer you need to have ovrd running. If this does not work out, try killing it and running it as root. It auto starts with your session and does not exit if it lacks the udev permissions. Make sure you have oculus-udev installed.

Make sure your headset is available in

$ OculusConfigUtil

When the viewer starts, you need to manually maximize it on the VR headset 😉 The user base of the demo (me) thought this was sufficient.

ROS Networking

ROS prints its host name and port when you start roscore

$ roscore
started roslaunch server http://kinecthost:43272/

If you have the above setup on two machines, you can run kinect2_bridge on the host as usual. On the client you need to provide the host's ROS_MASTER_URI when running the viewer.

$ ROS_MASTER_URI=http://kinecthost:11311/ rosrun kinect2_viewer kinect2_viewer ir
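ROS nodes resolve the master by reading ROS_MASTER_URI from the environment, falling back to the well-known default of port 11311 on localhost; a minimal sketch of that lookup (the helper function is mine, not a ROS API):

```python
import os

def master_uri():
    # Environment variable first, then the conventional default used by roscore.
    return os.environ.get("ROS_MASTER_URI", "http://localhost:11311/")

# Pointing a client at a remote master, as in the rosrun invocation above:
os.environ["ROS_MASTER_URI"] = "http://kinecthost:11311/"
print(master_uri())  # http://kinecthost:11311/
```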

Adding a simple audio stream using GStreamer

To run the following pipelines, you need to install the GStreamer Good Plugins. The pipelines use your PulseAudio default devices for recording and playback. You can set them, for example, in GNOME's audio settings.

On the host run:

gst-launch-1.0 udpsrc port=5000 \
                      caps='application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96' ! \
    rtpL16depay ! \
    pulsesink sync=false

On the client, run the following pipeline to capture and send audio. You need to change soundhost to the host name / IP of the machine running the playback pipeline above.

gst-launch-1.0 pulsesrc ! audioconvert ! \
    audio/x-raw, channels=1, rate=44100 ! \
    rtpL16pay ! udpsink host=soundhost port=5000
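Unlike the Kinect topics, this audio stream is tiny; a rough payload bandwidth estimate for the mono 44.1kHz L16 stream (my own calculation, ignoring RTP/UDP/IP overhead):

```python
# Raw 16-bit PCM: sample rate x channels x bytes per sample.
rate, channels, bytes_per_sample = 44100, 1, 2
bytes_per_second = rate * channels * bytes_per_sample

print("%.2fMB/s" % (bytes_per_second / 1e6))  # 0.09MB/s
```

Less than a tenth of a megabyte per second, so it fits comfortably next to the point cloud traffic.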

Putting everything together

Now you just need 2 DK2s, 2 Kinect v2s and to do this setup on 2 machines so you can have holographic video conferences. Have fun with that.


Viewing Kinect v2 Point Clouds with ROS in Arch Linux


Using libfreenect2 and the IAI Kinect v2 ROS modules you can easily watch a point cloud generated with your Kinect v2 sensor.

You need a Kinect for Windows v2, which is an Xbox One Kinect with an adapter from Microsoft's proprietary connector to standard USB 3, and hence a PC with a USB 3 port.

$ pacaur -S ros-jade-kinect2

Using my ros-jade-kinect2 AUR package, you can install all required dependencies, such as a ton of ROS packages, the Point Cloud Library and libfreenect2, which are all available in the Arch User Repository.

Testing libfreenect2

After installing libfreenect2 you can test your Kinect v2 with the following command

$ Protonect

If everything runs fine you will get an image like this from Protonect

In the image above you can see the unprocessed infrared image (top left), the color sensor image mapped to the calculated depth (top right) , the unprocessed color image (bottom left) and the calculated depth image (bottom right).

By default Protonect uses the OpenGL backend to generate the depth image. To test libfreenect2's different DepthPacketProcessor backends you can do

$ Protonect cl

Possible backends are: [gl | cl | cuda | cpu]

A possible error can be insufficient permissions for the USB device:

[Error] [Freenect2Impl] failed to open Kinect v2: @2:9 LIBUSB_ERROR_ACCESS Access denied (insufficient permissions)

libfreenect2 provides a /usr/lib/udev/rules.d/90-kinect2.rules file which gives the Kinect the udev tag uaccess to grant user access. The error appears when this has not taken effect, which can be fixed with a relogin. udevadm control -R didn't seem to work. Running Protonect with sudo will also help temporarily.

Using ROS

You can enter your ROS environment with

$ source /opt/ros/jade/setup.bash

You probably should create an alias for this environment in your shell config.

Now you can launch the roscore and leave it in a separate shell.

$ roscore

Install ros-jade-rosbash for rosrun. You can now list the options of the kinect2_bridge module.

$ rosrun kinect2_bridge kinect2_bridge -h

The default options for kinect2_bridge are the OpenCL registration method and the OpenGL depth method. You can start it like this

$ rosrun kinect2_bridge kinect2_bridge

Possible Problems

This fails for me on NVIDIA with

[ERROR] [Kinect2Bridge::start] Initialization failed!

This is due to the OpenCL registration method failing to initialize the OpenCL device.

A different error occurs with the beignet OpenCL implementation for Intel. It seems the OpenCL registration method does not support beignet's shader compiler.

[ INFO] [DepthRegistrationOpenCL::init] devices:
[ INFO] [DepthRegistrationOpenCL::init] 0: Intel(R) HD Graphics Haswell Ultrabook GT3 Mobile (GPU)[Intel]
[ INFO] [DepthRegistrationOpenCL::init] selected device: Intel(R) HD Graphics Haswell Ultrabook GT3 Mobile (GPU)[Intel]
[ERROR] [DepthRegistrationOpenCL::init] [depth_registration_opencl.cpp](216) data-> failed: -11
[ERROR] [DepthRegistrationOpenCL::init] failed to build program: -11
[ERROR] [DepthRegistrationOpenCL::init] Build Status: -2
[ERROR] [DepthRegistrationOpenCL::init] Build Options:
[ERROR] [DepthRegistrationOpenCL::init] Build Log: error: call to 'sqrt' is ambiguous

This can be solved by using the CPU registration method

$ rosrun kinect2_bridge kinect2_bridge _reg_method:=cpu

The OpenCL depth method with beignet produces a black screen. This can be solved by using the OpenGL depth method, which works fine with Mesa.

$ rosrun kinect2_bridge kinect2_bridge _reg_method:=cpu _depth_method:=opengl

Viewing the Point Cloud

Finally, open a shell in the ros environment and launch the viewer:

$ rosrun kinect2_viewer kinect2_viewer

This will show you a point cloud with the color sensor mapped to the depth buffer. It will look slightly shifted. You need to calibrate your Kinect in order to have a better mapping.


You can also run the viewer in ir mode to see only the depth sensor.

$ rosrun kinect2_viewer kinect2_viewer ir


Congratulations, you can do Point Cloud Selfies now


For more information about ROS Jade on Arch Linux, see

Tracing OpenGL Errors with Mesa


Debugging GL errors can be a time consuming task. Usually you need to query the OpenGL state machine with glGetError, which returns just an integer code of a previously recorded error.

First of all, this requires a switch over the error type, which could look like this:

GLenum err(glGetError());
while (err != GL_NO_ERROR) {
    std::string error;
    switch (err) {
        case GL_INVALID_ENUM:      error = "INVALID_ENUM";      break;
        case GL_INVALID_VALUE:     error = "INVALID_VALUE";     break;
        case GL_INVALID_OPERATION: error = "INVALID_OPERATION"; break;
        case GL_OUT_OF_MEMORY:     error = "OUT_OF_MEMORY";     break;
        default:                   error = "Unknown error";     break;
    }
    std::cout << "GL_" << error << std::endl;
    err = glGetError();
}

When you execute this code, the loop will print all recorded errors. This does not tell you when an error occurred, just that it happened at some point before calling the function.
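The decoding itself is trivial to model outside of C; the enum values below are the standard ones from the OpenGL headers, while the lookup helper is my own:

```python
# Standard OpenGL error codes as defined in GL/gl.h.
GL_ERRORS = {
    0x0500: "GL_INVALID_ENUM",
    0x0501: "GL_INVALID_VALUE",
    0x0502: "GL_INVALID_OPERATION",
    0x0505: "GL_OUT_OF_MEMORY",
}

def gl_error_name(err):
    # Mirrors the switch statement above, including the fallback label.
    return GL_ERRORS.get(err, "Unknown error")

print(gl_error_name(0x0501))  # GL_INVALID_VALUE
```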

With Mesa you can do the same by setting an environment variable.

$ export MESA_DEBUG=1

This will give you debug output similar to this:

Mesa: User error: GL_INVALID_VALUE in glClear(0x5f01)
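The 0x5f01 in that message is the bitmask the application passed to glClear; you can verify that it sets bits outside the valid clear masks, which is exactly what triggers GL_INVALID_VALUE (bit values are the standard ones from the GL headers, the helper is mine):

```python
# Valid glClear mask bits, as defined in GL/gl.h.
GL_DEPTH_BUFFER_BIT = 0x00000100
GL_STENCIL_BUFFER_BIT = 0x00000400
GL_COLOR_BUFFER_BIT = 0x00004000
VALID_CLEAR_BITS = GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT | GL_COLOR_BUFFER_BIT

def clear_mask_valid(mask):
    # glClear generates GL_INVALID_VALUE when any other bit is set.
    return (mask & ~VALID_CLEAR_BITS) == 0

print(clear_mask_valid(0x5f01))  # False
```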

The usual solution is to create a macro function which prints the file and line where the query was executed, and to put a call to it at the end of every piece of code that calls the OpenGL API.

#define glError checkGlError(__FILE__, __LINE__)

void checkGlError(const char* file, int line) {
    GLenum err(glGetError());
    while (err != GL_NO_ERROR) {
        std::string error;
        switch (err) {
            case GL_INVALID_ENUM:      error = "INVALID_ENUM";      break;
            case GL_INVALID_VALUE:     error = "INVALID_VALUE";     break;
            case GL_INVALID_OPERATION: error = "INVALID_OPERATION"; break;
            case GL_OUT_OF_MEMORY:     error = "OUT_OF_MEMORY";     break;
            default:                   error = "Unknown error";     break;
        }
        std::cout << file << " " << line << ": GL_" << error << std::endl;
        err = glGetError();
    }
}

void criticalOpenGLFunction() {
    // ... OpenGL calls ...
    glError;
}


This is how it has to be done on proprietary drivers like NVIDIA's, since you do not have debug information there. A better approach is to get a backtrace for every failing GL call. For this, you need to rebuild Mesa with debug symbols, or install a debug package provided by your distribution.

To build Mesa with debug symbols you have to set the following compiler options:

export CFLAGS='-O0 -ggdb3'
export CXXFLAGS='-O0 -ggdb3'

On Arch Linux this can be done in the build() function, when building mesa from ABS or mesa-git from AUR.

You will then be able to get a backtrace with gdb. Do not forget to build your application with debug symbols as well. For CMake projects like mesa-demos, you can achieve this by doing

$ cmake . -DCMAKE_BUILD_TYPE=Debug

Run the application in GDB

$ gdb ./src/glsl/geom-outlining-150

Set a breakpoint on _mesa_error with b in the gdb command line

(gdb) b _mesa_error
Function "_mesa_error" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (_mesa_error) pending.

Run the program with r and receive a backtrace with bt

(gdb) r
Starting program: mesa-demos/src/glsl/geom-outlining-150 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/".
Breakpoint 1, 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/
(gdb) bt
#0 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/
#1 0x0000000000403555 in Redisplay () at mesa-demos/src/glsl/geom-outlining-150.c:119
#2 0x00007ffff7313741 in fghRedrawWindow () from /usr/lib/
#3 0x00007ffff7313ba8 in ?? () from /usr/lib/
#4 0x00007ffff7315179 in fgEnumWindows () from /usr/lib/
#5 0x00007ffff7313c84 in glutMainLoopEvent () from /usr/lib/
#6 0x00007ffff7313d24 in glutMainLoop () from /usr/lib/
#7 0x0000000000403e41 in main (argc=1, argv=0x7fffffffe558) at mesa-demos/src/glsl/geom-outlining-150.c:392

Now we know that the GL error occurs in line 119 of geom-outlining-150.c.

You also get GL errors when using GLEW in core GL contexts, since it calls glGetString with the deprecated GL_EXTENSIONS enum. You can continue debugging with c. If you want a modern way to load a core context, try gl3w instead of GLEW.

Breakpoint 1, 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/
(gdb) bt
#0 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/
#1 0x00007ffff1fcb7b4 in _mesa_GetString () from /usr/lib/xorg/modules/dri/
#2 0x00007ffff70ba76a in glewInit () from /usr/lib/
#3 0x0000000000403deb in main (argc=1, argv=0x7fffffffe558) at mesa-demos/src/glsl/geom-outlining-150.c:381
(gdb) c
Mesa: User error: GL_INVALID_ENUM in glGetString(GL_EXTENSIONS)

Got Mandelbrot? A Mandelbrot set test pattern for the GStreamer OpenGL Plugins


I want to introduce you to the next station in my Mandelbrot world domination tour. Using C and GLSL, this Mandelbrot set visualization can be rendered to a video file.

$ gst-launch-1.0 gltestsrc pattern=13 ! video/x-raw, width=1920, height=1080 ! glimagesink

To run this command you need the patches for gst-plugins-bad I proposed upstream. After they get merged, you can try this in your distribution's GStreamer version.

This is the short patch required for the Mandelbrot set, made possible with the patch adding a generic shader pipeline for gltestsrc patterns.


Simple Mandelbrot Set Visualization in GLSL with WebGL

After experimenting with pyopengl and OpenGL 3.3 core pipelines in Python, I noticed that OpenGL 3 support in Python 3 needs a little more bindings work, especially around GLEW. Since WebGL does not need bindings and provides a modern OpenGL ES 2 style pipeline, I decided to give it a try for the Mandelbrot set.

The result is these ~130 lines of HTML, CSS, JavaScript and GLSL.

<!DOCTYPE html>
<html>
<head>
<title>Mandelbrot WebGL</title>
<style>
body, canvas {
    margin: 0px;
    width: 100%;
    height: 100%;
    overflow: hidden;
}
</style>
<script src=""></script>
<script src=""></script>
<script src=""></script>
<script id="vshader" type="x-shader/x-vertex">
uniform mat4 projection_matrix;
attribute vec4 position;
varying vec2 fractal_position;

void main() {
    gl_Position = projection_matrix * position;
    fractal_position = vec2(-position.y, position.x);
    fractal_position -= vec2(0.50, 0);
    fractal_position *= 1.5;
}
</script>
<script id="fshader" type="x-shader/x-fragment">
precision highp float;
uniform float time;
varying vec2 fractal_position;

const vec4 K = vec4(1.0, 0.66, 0.33, 3.0);

vec4 hsv_to_rgb(float hue, float saturation, float value) {
    vec4 p = abs(fract(vec4(hue) + K) * 6.0 - K.wwww);
    return vec4(value * mix(K.xxx, clamp(p.xyz - K.xxx, 0.0, 1.0), saturation), 1.0);
}

vec4 i_to_rgb(int i) {
    float hue = float(i) / 100.0 + sin(time);
    return hsv_to_rgb(hue, 0.5, 0.8);
}

vec2 pow_2_complex(vec2 c) {
    //return vec2(c.x*c.x - c.y*c.y, sin(time) * 2.0 * c.x * c.y);
    return vec2(c.x*c.x - c.y*c.y, 2.0 * c.x * c.y);
}

vec2 mandelbrot(vec2 c, vec2 c0) {
    return pow_2_complex(c) + c0;
}

vec4 iterate_pixel(vec2 position) {
    vec2 c = vec2(0);
    for (int i = 0; i < 100; i++) {
        if (c.x*c.x + c.y*c.y > 2.0*2.0)
            return i_to_rgb(i);
        c = mandelbrot(c, position);
    }
    return vec4(0, 0, 0, 1);
}

void main() {
    gl_FragColor = iterate_pixel(fractal_position);
}
</script>
<script>
var g = {};
var gl;
var canvas;

function init() {
    gl = initWebGL("mandelbrot");
    if (!gl) return;
    g.program = simpleSetup(gl, "vshader", "fshader",
                            ["position"], [0, 0, 0, 1], 10000);
    g.box = makeBox(gl);
    g.projection_matrix_loc =
        gl.getUniformLocation(g.program, "projection_matrix");
    gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);
    canvas = document.getElementById("mandelbrot");
    g.starttime = (new Date()).getTime();
    window.onresize = reshape;
    reshape();
    draw();
}

function reshape() {
    canvas.width = canvas.clientWidth;
    canvas.height = canvas.clientHeight;
    gl.viewport(0, 0, canvas.clientWidth, canvas.clientHeight);
    g.projection_matrix = new J3DIMatrix4();
    var aspect = canvas.clientWidth / canvas.clientHeight;
    g.projection_matrix.ortho(-1.0, 1.0,
                              -1.0 / aspect, 1.0 / aspect,
                              0.1, 1000.0);
}

function draw() {
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    gl.uniform1f(gl.getUniformLocation(g.program, "time"),
                 ((new Date()).getTime() - g.starttime) / 5000.0);
    g.projection_matrix.setUniform(gl, g.projection_matrix_loc, false);
    gl.drawElements(gl.TRIANGLES, g.box.numIndices, gl.UNSIGNED_BYTE, 0);
    window.requestAnimFrame(draw, canvas);
}
</script>
</head>
<body onload="init()">
<canvas id="mandelbrot"></canvas>
</body>
</html>

You can give the demo a try at JSFiddle.

If you want to see some funky stuff, uncomment the alternative return line in pow_2_complex.
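The shader's hsv_to_rgb is the common GLSL HSV-to-RGB one-liner; a Python port of it (with exact 1/3 and 2/3 where the shader approximates 0.33 and 0.66, function name mine) reproduces the standard library's colorsys conversion:

```python
import colorsys

def hsv_to_rgb_glsl(h, s, v):
    # Port of the fragment shader's hsv_to_rgb: K = (1, 2/3, 1/3, 3).
    K = (1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0)

    def channel(k):
        p = abs(((h + k) % 1.0) * 6.0 - K[3])  # abs(fract(h + k) * 6 - 3)
        return v * ((1.0 - s) * K[0] + s * min(max(p - K[0], 0.0), 1.0))

    return (channel(K[0]), channel(K[1]), channel(K[2]))

print(hsv_to_rgb_glsl(0.0, 1.0, 1.0))  # approximately (1, 0, 0): pure red
```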

Simple Mandelbrot Set Visualization in Python 3


Since I am currently studying for an Analysis exam and have always been fascinated by fractals, I wrote a small Mandelbrot set visualization in Python.

The core formula is the series of z = z^2 + c.

It was implemented with numpy and the Pillow image library.

To increase performance this could be implemented in GLSL, since the operation is easily separable to multiple threads. For example one thread per pixel, as available in the fragment shader.

#!/usr/bin/env python3
from numpy import array
from PIL import Image
import colorsys

WIDTH = 1024


def i_to_rgb(i):
    color = 255 * array(colorsys.hsv_to_rgb(i / 255.0, 1.0, 0.5))
    return tuple(color.astype(int))


def mandelbrot(x, y):
    c0 = complex(x, y)
    c = 0
    for i in range(1, 1000):
        if abs(c) > 2:
            return i_to_rgb(i)
        c = c * c + c0
    return (0, 0, 0)


img ='RGB', (WIDTH, int(WIDTH / 2)))
pixels = img.load()
for x in range(img.size[0]):
    print("%.2f%%" % (x / WIDTH * 100.0))
    for y in range(img.size[1]):
        pixels[x, y] = mandelbrot((x - (0.75 * WIDTH)) / (WIDTH / 4),
                                  (y - (WIDTH / 4)) / (WIDTH / 4))
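Short of a GLSL port, a cheap speedup is to vectorize the iteration with numpy, applying z = z^2 + c to every pixel at once instead of looping per pixel in Python; a hypothetical variant of the loop above:

```python
import numpy as np

def mandelbrot_counts(width, height, max_iter=100):
    # Build the complex plane as one array and iterate all pixels per step.
    xs = np.linspace(-2.0, 1.0, width)
    ys = np.linspace(-1.0, 1.0, height)
    c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for i in range(1, max_iter + 1):
        mask = np.abs(z) <= 2.0  # only advance points that have not escaped
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] = i
    return counts
```

The escape counts can then be mapped through i_to_rgb exactly as before.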