Holochat – A Holographic Telecommunication Demo

In July 2015 I did a VR demo at the CV-Tag at the University of Koblenz, using two Arch Linux PCs with two Oculus Rift DK2s and two Kinect v2 sensors. It utilizes ROS's capability to stream ROS topics over the network.

To run Holochat you need to set up libfreenect2 and ROS first, as explained in Viewing Kinect v2 Point Clouds with ROS in Arch Linux.

ROS Topics

You can list the ROS topics with the following command

rostopic list

You will see that kinect2_bridge provides topics in three different resolutions: [ hd | qhd | sd ], as well as in different formats and with optional compression. The IR image is only available in SD, due to the sensor's resolution.

To test their bandwidth you can use

$ rostopic bw /kinect2/hd/image_color
subscribed to [/kinect2/hd/image_color]
average: 101.23MB/s
 mean: 6.22MB min: 6.22MB max: 6.22MB window: 14
average: 96.92MB/s
 mean: 6.22MB min: 6.22MB max: 6.22MB window: 29

You will notice that the uncompressed color image is ~101.23MB/s, the uncompressed depth image is ~125.66MB/s and the uncompressed IR image is only ~12.75MB/s.

By default kinect2_viewer subscribes to /kinect2/qhd/image_color_rect and /kinect2/qhd/image_depth_rect. The IR mode has lower bandwidth requirements: sd/image_ir_rect and sd/image_depth_rect combined need only ~28MB/s, or ~17MB/s compressed, which is still more than a 100MBit/s LAN can carry (~12.5MB/s), so Gigabit Ethernet is advisable.

The depth buffer + IR point cloud will look like this

selfie

You can set topic options as explained on the help page

$ rosrun kinect2_viewer kinect2_viewer -h
/opt/ros/jade/lib/kinect2_viewer/kinect2_viewer [options]
 name: 'any string' equals to the kinect2_bridge topic base name
 mode: 'qhd', 'hd', 'sd' or 'ir'
 visualization: 'image', 'cloud' or 'both'
 options:
 'compressed' use compressed instead of raw topics
 'approx' use approximate time synchronization
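For example, to view the compressed SD topics as a point cloud, the invocation should look something like this:

$ rosrun kinect2_viewer kinect2_viewer sd cloud compressed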

Let's add some VR support to the viewer

In order to make this holographic, I patched kinect2_viewer with Oculus SDK support. Since the SDK is no longer maintained, I recommend the modified version from jherico, which can be found in oculus-rift-sdk-jherico-git on the AUR.

At the time I wrote the patches, OpenHMD lacked head tracking support, and libraries like Valve's OpenVR and Razer's OSVR were not around yet. They are still not really usable with the DK2 as I write this article.

My patched iai-kinect branch can be found on GitHub, and I made an AUR package with the viewer and my VR patches.

$ pacaur -S ros-jade-kinect2-viewer-oculus

To run the patched viewer you need to have ovrd running. It auto-starts with your session, but it does not exit if it lacks the required udev permissions, so make sure you have oculus-udev installed. If the viewer still cannot reach the headset, try killing ovrd and running it as root.
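Something along these lines (assuming ovrd is in your PATH):

$ killall ovrd
$ sudo ovrd &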

Make sure your headset is available in

$ OculusConfigUtil

When the viewer starts, you need to manually maximize it on the VR headset. 😉 The user base of the demo (me) thought this was sufficient.

ROS Networking

ROS prints the host name and port when you start roscore:

$ roscore
...
started roslaunch server http://kinecthost:43272/

If you have the above setup on two machines, you can run kinect2_bridge on the host as usual. On the client you need to point ROS_MASTER_URI at the host when running the viewer; the ROS master listens on port 11311 by default.

$ ROS_MASTER_URI=http://kinecthost:11311/ rosrun kinect2_viewer kinect2_viewer ir
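If the host name is not resolvable from the client, plain IP addresses work as well; ROS additionally honors ROS_IP / ROS_HOSTNAME for the address a node advertises to its peers. A sketch with made-up addresses:

$ ROS_MASTER_URI=http://192.168.0.10:11311/ ROS_IP=192.168.0.11 rosrun kinect2_viewer kinect2_viewer ir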

Adding a simple audio stream using GStreamer

To run the following pipelines, you need to install the GStreamer Good Plugins. The pipelines use your PulseAudio default devices for recording and playback; you can set those, for example, in GNOME's audio settings.
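If you prefer the command line, pactl can select the defaults as well; the device names below are just placeholders for whatever pactl lists on your system:

$ pactl list short sources
$ pactl set-default-source <your-microphone>
$ pactl list short sinks
$ pactl set-default-sink <your-headphones>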

On the host, run the following pipeline, which receives the RTP stream on port 5000 and plays it back:

gst-launch-1.0 udpsrc port=5000 \
                      caps='application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)256' ! \
    rtpL16depay ! \
    pulsesink sync=false

On the client, run the following pipeline, which records from your default source and streams it to the host. You need to change soundhost to the host name / IP of the machine running the receiving pipeline above.

gst-launch-1.0 pulsesrc ! audioconvert ! \
    audio/x-raw, channels=1, rate=44100 ! \
    rtpL16pay ! udpsink host=soundhost port=5000
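To check the network link without a microphone, you can replace the PulseAudio source with a test tone on the sending side (audiotestsrc ships with gst-plugins-base):

gst-launch-1.0 audiotestsrc ! audioconvert ! \
    audio/x-raw, channels=1, rate=44100 ! \
    rtpL16pay ! udpsink host=soundhost port=5000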

Putting everything together

Now you just need two DK2s, two Kinect v2s and this setup on two machines, and you can have holographic video conferences. Have fun with that.


Viewing Kinect v2 Point Clouds with ROS in Arch Linux

Using libfreenect2 and the IAI Kinect v2 ROS modules you can easily view a point cloud generated by your Kinect v2 sensor.

You need a Kinect for Windows v2, which is an Xbox One Kinect with an adapter from Microsoft's proprietary connector to standard USB 3, and hence a PC with a USB 3 port.

$ pacaur -S ros-jade-kinect2

Using my ros-jade-kinect2 AUR package, you can install all required dependencies, such as a ton of ROS packages, the Point Cloud Library and libfreenect2, all of which are available in the Arch User Repository.

Testing libfreenect2

After installing libfreenect2 you can test your Kinect v2 with the following command

$ Protonect

If everything runs fine, you will get an image like this from Protonect

In the image above you can see the unprocessed infrared image (top left), the color sensor image mapped onto the calculated depth (top right), the unprocessed color image (bottom left) and the calculated depth image (bottom right).

By default Protonect uses the OpenGL backend to generate the depth image. To test libfreenect2's different DepthPacketProcessor backends you can run, for example

$ Protonect cl

Possible backends are: [gl | cl | cuda | cpu]

A possible error can be insufficient permissions for the USB device:

[Error] [Freenect2Impl] failed to open Kinect v2: @2:9 LIBUSB_ERROR_ACCESS Access denied (insufficient permissions)

libfreenect2 provides the file /usr/lib/udev/rules.d/90-kinect2.rules, which gives the Kinect the udev tag uaccess to grant user access. The error above appears when this rule did not take effect; logging out and back in usually fixes it. udevadm control -R didn't seem to work for me. Running Protonect with sudo also helps temporarily.

Using ROS

You can enter your ROS environment with

$ source /opt/ros/jade/setup.bash

You probably should create an alias for this environment in your shell config.
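For example, in your ~/.bashrc:

alias ros-jade='source /opt/ros/jade/setup.bash'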

Now you can launch the roscore and leave it in a separate shell.

$ roscore

Install ros-jade-rosbash for rosrun. You can now list the options of the kinect2_bridge module.

$ rosrun kinect2_bridge kinect2_bridge -h

The default options for kinect2_bridge are OpenCL registration and OpenGL depth method. You can start it like this

$ rosrun kinect2_bridge kinect2_bridge

Possible Problems

This fails for me on NVIDIA with

[ERROR] [Kinect2Bridge::start] Initialization failed!

This is due to the OpenCL registration method failing to initialize the OpenCL device.

A different error occurs with the beignet OpenCL implementation for Intel. It seems the OpenCL registration method does not support beignet’s shader compiler.

[ INFO] [DepthRegistrationOpenCL::init] devices:
[ INFO] [DepthRegistrationOpenCL::init] 0: Intel(R) HD Graphics Haswell Ultrabook GT3 Mobile (GPU)[Intel]
[ INFO] [DepthRegistrationOpenCL::init] selected device: Intel(R) HD Graphics Haswell Ultrabook GT3 Mobile (GPU)[Intel]
[ERROR] [DepthRegistrationOpenCL::init] [depth_registration_opencl.cpp](216) data->program.build(options.c_str()) failed: -11
[ERROR] [DepthRegistrationOpenCL::init] failed to build program: -11
[ERROR] [DepthRegistrationOpenCL::init] Build Status: -2
[ERROR] [DepthRegistrationOpenCL::init] Build Options:
[ERROR] [DepthRegistrationOpenCL::init] Build Log: stringInput.cl:190:31: error: call to 'sqrt' is ambiguous

This can be solved by using the CPU registration method

$ rosrun kinect2_bridge kinect2_bridge _reg_method:=cpu

The OpenCL depth method with beignet produces a black screen. This can be solved by using the OpenGL depth method, which works fine with Mesa.

$ rosrun kinect2_bridge kinect2_bridge _reg_method:=cpu _depth_method:=opengl

Viewing the Point Cloud

Finally, open a shell in the ros environment and launch the viewer:

$ rosrun kinect2_viewer kinect2_viewer

This will show you a point cloud with the color image mapped onto the depth buffer. The mapping will look slightly shifted; you need to calibrate your Kinect to get a better mapping (the iai_kinect2 repository also ships a kinect2_calibration tool for this).

color

You can also run the viewer in ir mode to see only the depth sensor.

$ rosrun kinect2_viewer kinect2_viewer ir

ir

Congratulations, you can do Point Cloud Selfies now

selfie

For more information about ROS Jade on Arch Linux, see http://wiki.ros.org/jade/Installation/Arch

Tracing OpenGL Errors with Mesa

Debugging GL errors can sometimes be a time-consuming task. Usually you need to query the OpenGL state machine with glGetError, which just returns an integer error code.

First of all this requires a switch for checking the type of error; such a check (the helper below is only a sketch) could look like this:

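#include <stdio.h>
#include <GL/gl.h>

/* Sketch of the usual helper: drain the GL error queue and print a
 * readable name for every error code that is currently set. */
static void print_gl_errors(void)
{
    GLenum error;
    while ((error = glGetError()) != GL_NO_ERROR) {
        switch (error) {
        case GL_INVALID_ENUM:      printf("GL_INVALID_ENUM\n"); break;
        case GL_INVALID_VALUE:     printf("GL_INVALID_VALUE\n"); break;
        case GL_INVALID_OPERATION: printf("GL_INVALID_OPERATION\n"); break;
        case GL_STACK_OVERFLOW:    printf("GL_STACK_OVERFLOW\n"); break;
        case GL_STACK_UNDERFLOW:   printf("GL_STACK_UNDERFLOW\n"); break;
        case GL_OUT_OF_MEMORY:     printf("GL_OUT_OF_MEMORY\n"); break;
        default:
            printf("unknown GL error 0x%x\n", (unsigned int) error);
            break;
        }
    }
}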
When you execute this code, the loop will print all errors currently recorded in the GL error queue. This does not tell you which call caused an error, just that it happened at some point before the check.

With Mesa you can have errors reported automatically by setting an environment variable.

$ export MESA_DEBUG=1

This will give you debug output similar to this:

Mesa: User error: GL_INVALID_VALUE in glClear(0x5f01)

The usual solution is to create a macro that prints the file and line where the check was executed, and to put a call to it after every piece of code that calls the OpenGL API.
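A minimal sketch of such a macro (the name CHECK_GL is just illustrative):

/* requires <stdio.h> and the GL headers */
#define CHECK_GL() do { \
        GLenum error = glGetError(); \
        if (error != GL_NO_ERROR) \
            fprintf(stderr, "GL error 0x%x in %s:%d\n", \
                    (unsigned int) error, __FILE__, __LINE__); \
    } while (0)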

This is what you have to do on proprietary drivers like NVIDIA's, since you have no debug information there. A better approach is to get a backtrace for every failing GL call. For this, you need to rebuild your Mesa libGL.so with debug symbols, or install a debug package provided by your distribution.

To build Mesa with debug symbols you have to set the following compiler options:

export CFLAGS='-O0 -ggdb3'
export CXXFLAGS='-O0 -ggdb3'

On Arch Linux this can be done in the build() function when building mesa from the ABS or mesa-git from the AUR.
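A sketch of the relevant PKGBUILD changes (everything else stays as in the original file; options=('!strip') keeps makepkg from stripping the symbols again):

options=('!strip')

build() {
  export CFLAGS='-O0 -ggdb3'
  export CXXFLAGS='-O0 -ggdb3'
  # ... the original build steps continue unchanged ...
}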

You will then be able to get a backtrace with gdb. Do not forget to build your application with debug symbols as well. For CMake projects like mesa-demos, you can achieve this with

$ cmake . -DCMAKE_BUILD_TYPE=Debug

Run the application in GDB

$ gdb ./src/glsl/geom-outlining-150

Break with b on _mesa_error in the gdb command line

(gdb) b _mesa_error
Function "_mesa_error" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (_mesa_error) pending.

Run the program with r and receive a backtrace with bt

(gdb) r
Starting program: mesa-demos/src/glsl/geom-outlining-150 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
Breakpoint 1, 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/i965_dri.so
(gdb) bt
#0 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/i965_dri.so
#1 0x0000000000403555 in Redisplay () at mesa-demos/src/glsl/geom-outlining-150.c:119
#2 0x00007ffff7313741 in fghRedrawWindow () from /usr/lib/libglut.so.3
#3 0x00007ffff7313ba8 in ?? () from /usr/lib/libglut.so.3
#4 0x00007ffff7315179 in fgEnumWindows () from /usr/lib/libglut.so.3
#5 0x00007ffff7313c84 in glutMainLoopEvent () from /usr/lib/libglut.so.3
#6 0x00007ffff7313d24 in glutMainLoop () from /usr/lib/libglut.so.3
#7 0x0000000000403e41 in main (argc=1, argv=0x7fffffffe558) at mesa-demos/src/glsl/geom-outlining-150.c:392

Now we know that the failing GL call is in line 119 of geom-outlining-150.c, inside the Redisplay function.

You also get GL errors when using GLEW in a core GL context, since it calls glGetString with the deprecated GL_EXTENSIONS enum. You can continue debugging with c. If you want a more modern way to load a core context, try gl3w instead of GLEW.

Breakpoint 1, 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/i965_dri.so
(gdb) bt
#0 0x00007ffff1f57550 in _mesa_error () from /usr/lib/xorg/modules/dri/i965_dri.so
#1 0x00007ffff1fcb7b4 in _mesa_GetString () from /usr/lib/xorg/modules/dri/i965_dri.so
#2 0x00007ffff70ba76a in glewInit () from /usr/lib/libGLEW.so.1.13
#3 0x0000000000403deb in main (argc=1, argv=0x7fffffffe558) at mesa-demos/src/glsl/geom-outlining-150.c:381
(gdb) c
Continuing.
Mesa: User error: GL_INVALID_ENUM in glGetString(GL_EXTENSIONS)

Got Mandelbrot? A Mandelbrot set test pattern for the GStreamer OpenGL Plugins

I want to introduce you to the next station on my Mandelbrot world domination tour. Using C and GLSL, this Mandelbrot set visualization can be rendered by GStreamer, either to the screen as below or into a video file with a suitable pipeline.

$ gst-launch-1.0 gltestsrc pattern=13 ! video/x-raw, width=1920, height=1080 ! glimagesink

To run this command you need the patches for gst-plugins-bad I proposed upstream. After they get merged you can try this in your distribution’s GStreamer version.

This is the short patch required for the Mandelbrot set, made possible with the patch adding a generic shader pipeline for gltestsrc patterns.

Screenshot from 2014-08-21 02:24:08

Simple Mandelbrot Set Visualization in GLSL with WebGL

After experimenting with pyopengl and OpenGL 3.3 core pipelines in Python, I noticed that Python 3 support for OpenGL 3 still needs a little more bindings work, especially around GLEW. Since WebGL does not need any bindings and offers a modern, OpenGL ES 2 style pipeline, I decided to give it a try for the Mandelbrot set.

The result is these ~130 lines of HTML, CSS, JavaScript and GLSL.

You can give the demo a try at JSFiddle.

If you want to see some funky stuff, uncomment line 49.

Simple Mandelbrot Set Visualization in Python 3

Since I am currently studying for an Analysis exam and have always been fascinated by fractals, I wrote a small Mandelbrot set visualization in Python.

The core formula is the iteration z = z^2 + c, starting at z = 0: a point c belongs to the Mandelbrot set if this sequence stays bounded, which in practice means |z| never exceeds 2 within a fixed number of iterations.

It was implemented with numpy complex and the pillow image library.

To increase performance this could be implemented in GLSL, since the computation parallelizes easily, for example with one thread per pixel in a fragment shader.
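A minimal sketch of the escape-time approach with numpy's complex arrays and Pillow; all parameter values below are just examples:

import numpy as np
from PIL import Image

width, height, max_iter = 800, 600, 80

# grid of c values covering the interesting part of the complex plane
real_axis = np.linspace(-2.5, 1.0, width)
imag_axis = np.linspace(-1.25, 1.25, height)
c = real_axis[np.newaxis, :] + 1j * imag_axis[:, np.newaxis]

z = np.zeros_like(c)
escape = np.zeros(c.shape, dtype=np.uint16)

for i in range(1, max_iter + 1):
    z = z * z + c                  # the core iteration z = z^2 + c
    diverged = np.abs(z) > 2.0     # these points will not come back
    escape[diverged & (escape == 0)] = i
    z[diverged] = 2.0              # clamp to avoid overflow warnings

# points that never diverge stay black, later escapes are brighter
img = (escape.astype(np.float64) / max_iter * 255).astype(np.uint8)
Image.fromarray(img, mode="L").save("mandelbrot.png")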

HoVR – A Blender Game Engine Demo for the Oculus Rift and the Nintendo Balance Board

HoVR screenshot

HoVR is a virtual reality hover boarding demo. I created it in July 2013 for the CV-Tag, my university's demo day. Sadly, I didn't find the time to publish it back then. Since I got the opportunity to demo it at this year's gamescom (Hall 10.2, booth E017), I thought it might also be a good idea to release it.

HoVR is written in Python and uses the Blender Game Engine to render its graphics and calculate the physics simulation. It uses the Python bindings for the Oculus Rift that I made and published last year, and I also made Python bindings for the Wii Balance Board. They utilize the C libraries OpenHMD for the Rift and WiiC for the board. You can find python-rift and python-balanceboard on my GitHub, or try the Arch Linux AUR packages.

Furthermore, HoVR uses assets and rendering work by Martins Upitis, who released his wonderful Blender Game Engine water demo on his blog.

You can download HoVR from my GitHub, or install it easily on Arch Linux with the AUR package.

I could provide bootable USB images, if there is any interest.

Things you need:

  • Oculus Rift (tested with DK1)
  • Wii Balance Board
  • Bluetooth Dongle
  • Arch Linux or other Unix
  • A little talent for hacking and stuff, until I create a more convenient way of running this

Windows users could try MSYS2, but they would need to port the packages. macOS wasn't tested, but should theoretically work.

HoVR setup in my Room