Transforming Video on the GPU

OpenGL is very suitable for calculating transformations like rotation, scale and translation. Since the video ends up on one rectangular plane, the vertex shader only needs to transform 4 vertices (drawn as a GL_TRIANGLE_STRIP) and map the texture onto them. This is a piece of cake for the GPU, since it was designed to do that with many, many more vertices, so the performance bottleneck will be uploading the video frame into GPU memory and downloading it again.
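
The whole geometry involved is just one textured quad. Here is a sketch of the vertex data (x/y position plus u/v texture coordinate per corner; the exact layout is illustrative):

# 4 vertices, drawn as a GL_TRIANGLE_STRIP: x, y, u, v
quad = [
    -1.0, -1.0, 0.0, 0.0,  # bottom left
     1.0, -1.0, 1.0, 0.0,  # bottom right
    -1.0,  1.0, 0.0, 1.0,  # top left
     1.0,  1.0, 1.0, 1.0,  # top right
]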

The transformations

GStreamer already provides some separate plugins, each of which is suitable for doing one of these transformations.

Translation

videomixer: The videomixer element translates the video with the xpos and ypos properties of its sink pads.

frei0r-filter-scale0tilt: The frei0r plugin is very slow, but it has the advantage of doing scale and tilt (translation) in one plugin. This is why I used it in my 2011 GSoC. It also provides a "clip" property for cropping the video.

Rotation

rotate: The rotate element is able to rotate the video, but it has to be applied after the other transformations; otherwise you get borders.


Scale

videoscale: The videoscale element is able to resize the video, but it has to be applied after the translation. Additionally, it resizes the whole canvas, so it's not perfect either.

frei0r-filter-scale0tilt: This plugin is able to scale the video and leave the canvas size as it is. Its disadvantage is being very slow.

So we have some plugins that do transformations in GStreamer, but as you can see, combining them is next to impossible, and slow on top of that. But how slow?

Let’s see how the performance of gltransformation compares to the GStreamer CPU transformation plugins.

Benchmark time

All the commands are measured with time. The tests were done on the nouveau driver, using Mesa as the OpenGL implementation. All GPUs should produce similar results, since not much is actually calculated on them; the bottleneck should be the upload.
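
Each measurement was taken by prefixing the pipeline with time, like this:

time gst-launch-1.0 videotestsrc num-buffers=10000 ! fakesink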

Pure video generation

gst-launch-1.0 videotestsrc num-buffers=10000 ! fakesink

CPU 3.259s

gst-launch-1.0 gltestsrc num-buffers=10000 ! fakesink

OpenGL 1.168s

Cool, gltestsrc seems to run faster than the classical videotestsrc. But we are not uploading real video to the GPU! This is cheating! Don't worry, we will do real-world tests with files soon.

Rotating the test source

gst-launch-1.0 videotestsrc num-buffers=10000 ! rotate angle=1.1 ! fakesink

CPU 10.158s

gst-launch-1.0 gltestsrc num-buffers=10000 ! gltransformation zrotation=1.1 ! fakesink

OpenGL 4.856s

Oh cool, we're twice as fast in OpenGL. This is still without uploading the video to the GPU, though.

Rotating a video file

In this test we will rotate an HD video file with a duration of 45 seconds. I'm replacing only the sink with fakesink. Note that the CPU rotation needs videoconvert elements.

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! videoconvert ! rotate angle=1.1 ! videoconvert ! fakesink

CPU 17.121s

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! gltransformation zrotation=1.1 ! fakesink

OpenGL 11.074s

Even with uploading the video to the GPU, we’re still faster!

Doing all 3 operations

OK, now let's see how we perform when doing translation, scale and rotation together. Note that the CPU pipeline exhibits the problems described earlier.

gst-launch-1.0 videomixer sink_0::ypos=540 name=mix ! videoconvert ! fakesink filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! videoconvert ! rotate angle=1.1 ! videoscale ! video/x-raw, width=150 ! mix.

CPU 17.117s

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! gltransformation zrotation=1.1 xtranslation=2.0 yscale=2.0 ! fakesink

OpenGL 9.465s

No surprise, it’s still faster and even correct.

frei0r-filter-scale0tilt

Let's be unfair and benchmark the frei0r plugin. It has one advantage: it can do translation and scale correctly. Rotation, however, can only be applied at the end, so rotating around different pivot points is not possible.

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! videoconvert ! rotate angle=1.1 ! frei0r-filter-scale0tilt scale-x=0.9 tilt-x=0.5 ! fakesink

CPU 35.227s

Damn, that is horribly slow.

The gltransformation plugin is up to 3 times faster than that!

Results

The gltransformation plugin does all 3 transformations together in a correct fashion, and is fast in addition. Furthermore, three-dimensional transformations are possible, like rotating around the X axis or translating in Z. If you want, you can even use orthographic projection.
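
For example, a rotation around the X axis combined with a translation in Z could look like this (a sketch; xrotation and ztranslation are assumed to follow the same property naming as the zrotation, xtranslation and yscale properties used above):

gst-launch-1.0 gltestsrc ! gltransformation xrotation=0.5 ztranslation=-1.0 ! glimagesink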

I also want to thank ystreet00 for helping me to get into the world of the GStreamer OpenGL plugins.

To run the test yourself, check out my patch for gst-plugins-bad:

https://bugzilla.gnome.org/show_bug.cgi?id=731722

Also don’t forget to use my python testing script:

https://github.com/lubosz/gst-gl-tests/blob/master/transformation.py

Graphene

gltransformation utilizes ebassi's new Graphene library, which implements the linear algebra calculations needed for modern OpenGL without the fixed function pipeline.

Alternatives worth mentioning for C++ are Qt's QMatrix4x4 and of course g-truc's glm. Neither is usable with GStreamer, which is plain C, so I was very happy that there was a GLib alternative.

After I wrote some tests, and thanks to ebassi's wonderful and quick help, Graphene was ready for use with GStreamer!

Implementation in Pitivi

To make this transformation usable in Pitivi, we need some transformation interface. The last one I did was rendered in Cairo. Mathieu managed to get this rendered with the ClutterSink, but using the GStreamer OpenGL plugins with the clutter sink is currently impossible. The solution will either be to extend glimagesink to draw an interface over the video, or to make the clutter sink work with the OpenGL plugins. But I am not much of a fan of the clutter sink, since it has introduced problems in Pitivi.

Oh Django…

django (1.6)$ grep AUTH_PROFILE_MODULE * -R
django/contrib/auth/models.py: warnings.warn("The use of AUTH_PROFILE_MODULE to define user profiles has been deprecated.",
django/contrib/auth/models.py: 'You need to set AUTH_PROFILE_MODULE in your project '

View Side-by-Side Stereoscopic Videos with GStreamer and Oculus Rift


GStreamer can do a lot. Most importantly, it has the exact functionality I was looking for when I wanted to play a stereoscopic video on the Oculus Rift: decoding a video stream and applying a GLSL fragment shader to it.

Available Players

I found a few solutions that try to achieve that goal, but they were very unsatisfactory. Mostly they failed to decode the video stream or didn't start for other reasons. They are not maintained that well, since they are recent one-man projects with binary-only releases posted on forums. And worst of all, they only support Windows.

Surprisingly, I experienced the best results with OculusOverlay and VLC, which transforms a hardcoded part of your desktop in a very hacky way with XNA. It also works with YouTube.

VideoPal is a player written in Java using JOGL. In theory it could work on Linux, but:

Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
no swt-win32-3740 in java.library.path

Yeah… no time for jar reverse engineering, and there is no link to the source. I was able to run it on Windows, but it couldn't open an H264 video.

There is also OculusPlayer, which uses libvlc but does not release its source. The idea is good, but it didn't work.

VR Player is a GPLv2-licensed player written in C#. It also couldn't decode the stream.

Another player is VR Cinema 3D, which does not play stereo video, but simulates a virtual cinema showing a 2D film. Funny idea.

Get some Stereo Videos

You can search for stereoscopic videos on YouTube with the 3D search filter. There are tons of stereoscopic videos, like this video of piranhas.

Download the YouTube video with the YouTube downloader of your choice that supports 3D videos, for example PwnYouTube.

For convenient usage in the terminal you should rename the file to something short and without spaces.

Using GStreamer

The minimal GStreamer pipeline for playing the video stream of an MP4 file (QuickTime / H.264 / AAC) looks like this:

$ gst-launch-1.0 filesrc location=piranhas.mp4 ! qtdemux ! avdec_h264 ! autovideosink

It contains the GStreamer elements for the file source, the QuickTime demuxer, the H264 decoder and the automatic video sink.

If you want more information on the elements, try gst-inspect

$ gst-inspect-1.0 qtdemux

If you want audio, you need to name the demuxer and add an audio queue with a decoder and an audio sink.

$ gst-launch-1.0 filesrc location=piranhas.mp4 ! qtdemux name=dmux ! avdec_h264 ! autovideosink dmux. ! queue ! faad ! autoaudiosink

Let's add some Oculus Rift distortion now. We will use a GLSL fragment shader and the glshader element from the gst-plugins-gl package for that. Since the GStreamer GL plugins are not released yet, you need to build them yourself. You could use my Arch Linux AUR package or the GStreamer SDK build system cerbero. Here is a tutorial on how to build GStreamer with cerbero.

In the GLSL shader you can change some parameters like video resolution, eye distance, scale and kappa. This could also be done with uniforms and set by a GUI.

The final GStreamer pipeline looks like this. Since we are using a GL plugin, we need to use the glimagesink.

TL;DR

$ gst-launch-1.0 filesrc location=piranhas.mp4 ! qtdemux name=dmux ! avdec_h264 ! glshader location=distortion.frag ! glimagesink dmux. ! queue ! faad ! autoaudiosink

Seeking and full screen are features that could be achieved in a GStreamer python application.
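
Here is a minimal sketch of such an application, assuming the GStreamer 1.0 Python bindings via GObject introspection and reusing the pipeline from above (the 10 second jump is just an example):

#!/usr/bin/env python3
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# The pipeline from above, minus the audio branch for brevity.
pipeline = Gst.parse_launch(
    "filesrc location=piranhas.mp4 ! qtdemux ! avdec_h264 "
    "! glshader location=distortion.frag ! glimagesink")

pipeline.set_state(Gst.State.PLAYING)

def seek_forward():
    # Jump 10 seconds ahead of the current position.
    ok, position = pipeline.query_position(Gst.Format.TIME)
    if ok:
        pipeline.seek_simple(
            Gst.Format.TIME,
            Gst.SeekFlags.FLUSH | Gst.SeekFlags.KEY_UNIT,
            position + 10 * Gst.SECOND)

GLib.MainLoop().run()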

Oculus Rift Support In Blender Game Engine


Motivation

With the Blender Game Engine we would have a Free Software alternative to Unity for virtual reality demos with the Oculus Rift. Existing BGE demos could be ported easily, and with BGE you can easily create and import assets for your demo.

Status of the Rift in Free Software

Since its release in March, the Oculus Rift has seen rather good adoption in proprietary game engines. Source, UDK3 and primarily Unity embraced the new VR technology early. Sadly the community was only given Unity and C/C++ as tools, so most current demos are done with Unity. Free Software projects like Blender were rather sceptical about an implementation due to the proprietary licensing of the Oculus SDK. The SDK license demonstrates that open source does not equal free software.
Only a few Oculus demos were available for GNU/Linux, due to there being no Linux release of the official SDK. Because of that, proprietary demos like Team Fortress 2 do not include Rift support either. The Unity demos are also not built for Linux, but for OS X and Windows only.
Luckily we are in the GitHub era, where you can find software for every need on the internet. The official Oculus SDK was ported to C and Linux by nsb, but it has to keep its license. Another interesting project is libvr by thib, which has a BSD-2-Clause license. Both libraries work on GNU/Linux without problems, but the first one is not a candidate of choice for Blender, since it lacks a free software license. The third available library is OpenHMD, which is Boost licensed. I made a minimalist Python wrapper for it, so we now have the Rift sensor in BGE. I achieved this with a similar approach to my Wii Balance Board wrapper for Python / BGE.

Including Rift input to your Blender Game Engine Demo

To build my GitHub project, you need OpenHMD installed. For convenience, I created an Arch User Repository package for us Arch users ;)
For everyone else: install it in /usr/local or write a Debian package, etc.
To build python-rift, you need to run "./setup.py build". Then symlink the resulting .so file into your ~/.blender directory.
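
For example (the build output path and module name are illustrative and depend on your Python version):

./setup.py build
ln -s $(pwd)/build/lib.linux-x86_64-3.3/rift.so ~/.blender/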

Since it is in your Blender's Python path now, you can initialize the PyRift object in BGE like this:
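
(A sketch; the module and class names come from my python-rift wrapper, so treat the exact API as illustrative.)

import bge
import rift  # hypothetical module name of the python-rift .so

# Create the sensor handle once and keep it in the globalDict,
# so it survives script restarts between logic ticks.
if "hmd" not in bge.logic.globalDict:
    bge.logic.globalDict["hmd"] = rift.PyRift()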

The rotation is acquired as a quaternion. Note that OpenHMD uses XYZW order, but Blender uses WXYZ. I got headaches not only from figuring that out, but also from the wrong rotation while I had the HMD on.
You can transform your camera like this:
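
(A sketch, assuming the wrapper hands back the four quaternion components in OpenHMD's XYZW order; the poll and rotation names are illustrative.)

import bge
from mathutils import Quaternion

# Fetch the sensor handle created during initialization.
hmd = bge.logic.globalDict["hmd"]
hmd.poll()  # illustrative: read the latest sample from the sensor
x, y, z, w = hmd.rotation  # OpenHMD component order: XYZW

# Blender's Quaternion expects WXYZ, so reorder the components.
camera = bge.logic.getCurrentScene().active_camera
camera.worldOrientation = Quaternion((w, x, y, z))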

If you have a better way to do it, please tell me. This was rather a quick hack, but it is pretty functional.
Nice! The camera moves, for me with amazingly low latency. Unreal didn't have such low latency on Windows.

Rendering for the Oculus Rift in BGE

As we know, the BGE supports various types of stereoscopic rendering. One of them is basically the one required by the Rift: Side-by-Side. The only thing we still need is the reverse lens distortion transformation, which we can achieve with a simple fragment shader.
Different versions of this fragment shader appeared on the net. A good explanation of the method can be found on the FireBox page. Another version is the one included in the OpenHMD examples.
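
At their core, these variants implement the same polynomial barrel distortion model: the distance r of each fragment from the lens center is rescaled using the kappa values as coefficients, roughly

r' = r * (k0 + k1*r^2 + k2*r^4 + k3*r^6)

and the texture is sampled at the rescaled coordinate, pre-compensating the pincushion distortion of the Rift's lenses.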
Sounds good, huh? Yeah, but it didn't work. The fragment shader transformed the Side-by-Side rendering asymmetrically, so that the left eye was smaller than the right.
The interesting thing is that the output of the shader is symmetrical when rendered with the other stereo options, including Above-Below, and without stereo. I asked for help in a Blender Stack Exchange post and on the BlenderArtists forum. Moguri from the forum came up with this patch that fixes the issue. Hooray, Rift support is complete.
As I noticed, people were trying to achieve this in Blender and had similar problems, due to this bug.
If you want Oculus Rift rendering support, try my example blend file and apply Moguri’s patch to Blender.

Watching a video in GNOME Shell

A pretty annoying issue I had to deal with in GNOME 3 on Arch Linux was the screen turning black every 10 minutes while watching a video. Hardware tactics for moving the mouse from comfortable movie watching positions had to be evolved, like kicking chairs. I finally found the solution on the Fedora Forums.

The solution was to remove gnome-screensaver and install xscreensaver. In xscreensaver I could disable the screensaver.

$ sudo pacman -R gnome-screensaver

$ sudo pacman -S xscreensaver

$ xscreensaver
