July 20, 2017

Body Tracking and Augmented Reality Body Effects

Body Tracking

In addition to face tracking, we are actively working on body tracking and full body effects.

Below is a video “leaked” directly from our labs.

The video shows some effects created to test our progress in body tracking, effects and animation.
The footage and the main actor are well below our quality standards, but this video was never meant to be shown outside our developers’ guild.

You can see props attached to limbs, cloth dynamics and full-body particle effects.

For this kind of effect we chose the Unity 3D game engine because of its easy-to-use and powerful rendering and animation features.

On the hardware side we are using a mid-to-high-end Microsoft Windows PC with an NVIDIA GPU and an Intel i7 CPU.

We used a Microsoft Kinect v2 as the 3D sensor, mainly because of its easy integration with Windows and Unity and its stable drivers and libraries.

Technical challenges

  1. Noise

    [Image: Kinect v2 depth map]

    The Kinect’s joint detection works on a grayscale image called a Z-image, where the gray value of every pixel represents its distance from the sensor. To get this kind of image, the Kinect v2 pulses infrared light at 300 Hz and measures, for every pixel, the time the light takes to make the round trip. This method, called ToF (Time of Flight), works very well but is not error-free. Different kinds of surfaces reflect IR light in different ways, which can make an even surface look more like boiling water than still water.

    This noisy data can lead to wobbling joint detection. To limit the error, the data is smoothed over time with a filtering function. In our use case we traded off smoothness against reactivity.
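
    As a rough illustration of the kind of smoothing involved, here is a minimal Python sketch of an exponential filter over joint positions; the joint names, data layout and the alpha value are assumptions for illustration, not the exact filter used in our pipeline.

      # Illustrative exponential (low-pass) filter over joint positions.
      # A lower alpha gives smoother but laggier joints; a higher alpha
      # reacts faster but lets more sensor noise through.
      def smooth_joints(previous, current, alpha=0.3):
          """Blend the previously filtered joint positions with the new noisy ones."""
          if previous is None:                      # first frame: nothing to smooth against
              return dict(current)
          smoothed = {}
          for joint, (x, y, z) in current.items():
              px, py, pz = previous.get(joint, (x, y, z))
              smoothed[joint] = (px + alpha * (x - px),
                                 py + alpha * (y - py),
                                 pz + alpha * (z - pz))
          return smoothed

      # Example with made-up coordinates (metres in sensor space):
      prev  = {"head": (0.00, 1.70, 2.00), "spine_base": (0.00, 0.94, 2.00)}
      noisy = {"head": (0.02, 1.71, 2.03), "spine_base": (0.00, 0.95, 2.01)}
      print(smooth_joints(prev, noisy))             # values end up between prev and noisy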

  2. Auto size

    [Image: auto-size joint triangle]

    After testing with several users, we faced the need to automatically scale the avatar model to the size of the user, since deforming the avatar over the joints was not an option.

    We chose three joints, selected on an anatomical/scientific basis, and linked them together.

    The ratio between the sides of the red triangle gave us a scale factor to apply to the standard avatar model.
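
    A minimal Python sketch of the idea (with made-up joint names and coordinates, since the exact three joints are not listed here): measure the same triangle on the user and on the reference avatar and turn the side ratios into one uniform scale factor.

      # Auto-size sketch: compare the triangle formed by three joints on the user
      # with the same triangle on the reference avatar. Joint names and numbers
      # are illustrative assumptions.
      import math

      def side_lengths(a, b, c):
          return math.dist(a, b), math.dist(b, c), math.dist(c, a)

      def scale_factor(user_joints, avatar_joints,
                       names=("shoulder_left", "shoulder_right", "spine_base")):
          user = side_lengths(*(user_joints[n] for n in names))
          ref  = side_lengths(*(avatar_joints[n] for n in names))
          # Average the per-side ratios into one uniform scale for the avatar.
          return sum(u / r for u, r in zip(user, ref)) / len(names)

      user   = {"shoulder_left": (-0.22, 1.45, 2.0), "shoulder_right": (0.20, 1.44, 2.0), "spine_base": (0.0, 0.95, 2.0)}
      avatar = {"shoulder_left": (-0.18, 1.40, 0.0), "shoulder_right": (0.18, 1.40, 0.0), "spine_base": (0.0, 0.90, 0.0)}
      print(round(scale_factor(user, avatar), 2))   # -> 1.06: this user is a bit bigger than the model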

     

  3. Occlusion

    [Image: occlusion test]

    Occlusion is one of the most important aspects of making an augmented image feel like it is really there.

    If an image or an object sits between the camera and the user, with no other elements in front of it, you don’t have to worry about occlusion. But if the same object is behind the user, you need something to “erase” the pixels of the object that the user would cover, so that the object really appears to be behind the user.

    To achieve this effect, you need something that follows the silhouette of the user and writes the estimated depth of the user’s pixels into the Z-buffer (a grayscale image used to store and compare the depth of every pixel).

    We tried using the standard avatar with a spatial-mapping shader (write Z only), with mixed results, as shown in the picture to the right.

    When we started mixing the depth data from the sensor with the depth data from the 3D engine, we got better results, but perfect occlusion still needs additional work.
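
    To make the idea concrete, here is a rough Python/NumPy sketch of the per-pixel depth mixing described above; in practice this runs on the GPU against the engine’s Z-buffer, and the calibration between the sensor and the virtual camera is glossed over here.

      # Depth-mixing sketch: show a rendered pixel only where the virtual surface
      # is closer to the camera than the user measured by the sensor.
      import numpy as np

      def composite(camera_rgb, render_rgb, sensor_depth, render_depth):
          # sensor_depth: metres from the sensor, 0 where there is no reading
          # render_depth: metres from the virtual camera, +inf where nothing was drawn
          no_user         = sensor_depth <= 0                 # nothing to occlude here
          object_in_front = render_depth < sensor_depth       # virtual surface wins
          show_object     = no_user | object_in_front
          return np.where(show_object[..., None], render_rgb, camera_rgb)

      # Tiny 2x2 example: camera pixels are 0, rendered object pixels are 1.
      cam, obj = np.zeros((2, 2, 3)), np.ones((2, 2, 3))
      d_sensor = np.array([[1.5, 0.0], [1.5, 1.5]])           # user at 1.5 m, one missing reading
      d_render = np.array([[2.0, 2.0], [1.0, np.inf]])        # behind, behind, in front, absent
      print(composite(cam, obj, d_sensor, d_render)[..., 0])  # [[0. 1.] [1. 0.]]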

  4. Lighting and Environment

    [Image: lighting setup]

    As with occlusion, matching the environment’s lighting conditions is a must for every augmented reality application.

    We built a simple setup that can be configured for the place where the app will run.

    An HDRI environment map can also be used for lighting and reflections.

    A physically based rendering (PBR) algorithm was used to make the most of the detailed and metallic surfaces.
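
    Purely as an illustration of what a per-venue setup can look like (the names and fields below are hypothetical, and the real configuration lives inside Unity), a small Python config loaded at startup is often enough:

      # Hypothetical per-venue lighting configuration loaded at startup.
      import json

      DEFAULTS = {"ambient_intensity": 1.0,
                  "key_light_color":   [1.0, 1.0, 1.0],
                  "key_light_angle":   45.0,
                  "hdri_map":          None}      # optional .hdr/.exr panorama for image-based lighting

      def load_venue_lighting(path):
          with open(path) as f:
              venue = json.load(f)
          config = dict(DEFAULTS)
          config.update(venue)                    # the venue file overrides the defaults
          return config

      # A venue file could contain, for example:
      # {"ambient_intensity": 0.6, "key_light_color": [1.0, 0.92, 0.85], "hdri_map": "mall_atrium.hdr"}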

     

  5. Clothes dynamics and collisions

    [Image: cloth dynamics and collision capsules]

    To test a superhero cape effect, we relied on the cloth dynamics built into Unity. To make the cape interact (collide) realistically with the user’s avatar, we spawn a number of capsule colliders on every joint that will interact with the cape.

    Multiple capsule colliders are a lot faster than a single mesh collider and can easily fit the bounding shapes of the limbs and torso.

    This was just a proof of concept; simulating clothes and capes correctly needs more effort, especially in the cloth simulation configuration, where the weight, strength and elasticity of the fabric need to be approximated better.
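
    An engine-agnostic Python sketch of how a capsule can be fitted to each limb segment between two joints (in the project this maps onto Unity CapsuleCollider components; the joint pairs and radii below are illustrative assumptions):

      # Fit one capsule per bone: centred on the segment between two joints,
      # aligned with it, and padded by the radius so the ends cover the joints.
      import math

      BONES = [("shoulder_right", "elbow_right", 0.06),   # (joint A, joint B, radius in metres)
               ("elbow_right",    "wrist_right", 0.05),
               ("hip_right",      "knee_right",  0.09),
               ("knee_right",     "ankle_right", 0.07)]

      def capsule_for_bone(joints, a, b, radius):
          (ax, ay, az), (bx, by, bz) = joints[a], joints[b]
          length = max(math.dist(joints[a], joints[b]), 1e-6)
          return {"center":    ((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2),
                  "direction": ((bx - ax) / length, (by - ay) / length, (bz - az) / length),
                  "height":    length + 2 * radius,
                  "radius":    radius}

      def colliders_for_frame(joints):
          # Rebuild (or update) the capsules from the tracked joints of the current frame.
          return [capsule_for_bone(joints, a, b, r) for a, b, r in BONES if a in joints and b in joints]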

  6. Face tracking via MoodMe Face Tracker

    The HD face tracker from Microsoft’s Kinect was not as good as we needed, so we deployed our main battle tank, the MoodMe Face Tracker, into the war zone. To better follow the integration between our SDK and Unity 3D, you can refer to this page: MoodMe SDK Unity Plugin