September 7, 2016

Face Recognition SDK

The MoodMe Face Tracking and Face Recognition SDK (Software Development Kit) for iOS, Android, Linux & Windows (Face SDK) enables you to create applications that can recognize human faces in real time.

MoodMe Face Recognition is also capable of recognizing certain animal faces such as cats, dogs and horses. More animals are being added as we enrich our machine learning data sets.

The Face Recognition SDK engine analyzes input from a video stream (a live feed from a camera or a video file), deduces the head pose, facial features and expressions, and makes that information available to an application in real time.

Use cases enabled by the MoodMe Face Recognition SDK include rendering a tracked person’s head orientation (3 degrees of freedom) and facial expression onto an avatar in a telecommunication application or a game, or driving a natural user interface.

Technical Specifications

Coordinate System

MoodMe Face Recognition SDK uses the following coordinate system to output the results of its 3D tracking.

Figure 1.  Camera Space


The 3D mask is computed in the camera’s coordinate frame, with coordinates that place it over the user’s face, as shown in Figure 1 – Camera Space.

Input Images

MoodMe Face Recognition SDK accepts color images as input.

The quality of the recognition and tracking may be affected by the image quality of these input frames: overexposed, very bright or, on the contrary, dark frames, as well as blurry frames, are more difficult to recognize than well-lit, sharp frames. The definition and size of faces also influence the quality of the recognition: larger or closer faces are easier to recognize than smaller faces.

Using the MoodMe Face Recognition SDK

Once you have obtained a video frame from your camera (or other video source) as a pixel buffer, you have to call [MDMTrackerManager processImageBuffer:frame];

At this point you have to check whether the tracker has recognized a face in the frame by reading the MDMTrackerManager.faceTracked flag.

If the face was successfully tracked, you can retrieve the following outputs (in this example we consider a tracker returning 66 coordinate points; note that our Face Recognition SDK is capable of identifying up to 150 facial landmarks / face feature points, if the underlying device, mobile or desktop, is powerful enough):

  1.  2D landmarks as an array of floats formatted as x1,y1,x2,y2,…,x66,y66, where x and y are the absolute 2D coordinates of the landmarks
  2.  3D vertices as an array of floats formatted as x1,y1,z1,x2,y2,z2,…,x66,y66,z66, where x, y, z are the 3D vertex coordinates
  3.  a ModelView transformation matrix to apply to the 3D vertex model

P.S.  The 3D vertices buffer is ready to be used with OpenGL directly. A minimal sketch of this per-frame workflow is shown below.
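As an illustration of this flow, here is a minimal Objective-C sketch. The processImageBuffer: call and the faceTracked flag are the ones described above; the property names used to read the results (landmarks2D, vertices3D, modelViewMatrix) and the header name are assumptions made for the example, so check the SDK headers for the actual API.

    // Minimal per-frame sketch (Objective-C). Names marked "assumed" are
    // illustrative placeholders, not necessarily the SDK's documented API.
    #import <CoreVideo/CoreVideo.h>
    #import "MDMTrackerManager.h"   // SDK header name assumed

    - (void)handleVideoFrame:(CVPixelBufferRef)frame
    {
        // 1. Feed the pixel buffer to the tracker.
        [MDMTrackerManager processImageBuffer:frame];

        // 2. Check whether a face was recognized in this frame.
        if (!MDMTrackerManager.faceTracked) {
            return; // no face in this frame, nothing to draw
        }

        // 3. Read the tracking results (accessor names assumed).
        //    66 landmarks -> 132 floats laid out x1,y1,...,x66,y66
        const float *landmarks2D = MDMTrackerManager.landmarks2D;
        //    66 vertices  -> 198 floats laid out x1,y1,z1,...,x66,y66,z66
        const float *vertices3D  = MDMTrackerManager.vertices3D;
        //    4x4 ModelView matrix, ready to be passed to OpenGL
        const float *modelView   = MDMTrackerManager.modelViewMatrix;

        // Example: absolute 2D position of the first landmark.
        float x1 = landmarks2D[0];
        float y1 = landmarks2D[1];

        // Hand vertices3D and modelView to the renderer
        // (see "2D Mesh and Points" below).
    }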

2D Mesh and Points

The Face Recognition SDK tracks between 66 and 150 2D and 3D facial landmarks / face feature points, indicated in the following image.

Figure 2.  Tracked Points


The data output listed above (1, 2, 3) is enough to draw whatever the application developer wants with graphics libraries such as OpenGL.

In order to paint a skin over a face, the application developer has to render two (OpenGL) layers, as sketched after the list below:

  1. the camera video frame as the background
  2. a texture mapped over the 3D vertices transformed by the ModelView matrix
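A minimal sketch of those two layers, written as Objective-C with OpenGL ES 2.0 calls, follows. The shader programs, textures and vertex/index buffers are assumed to have been created during setup, and their names are illustrative only; they are not part of the MoodMe SDK.

    #import <OpenGLES/ES2/gl.h>

    // Sketch of the two render passes. All _xxx handles are assumed to be
    // created at setup time; their names are illustrative, not MoodMe SDK symbols.
    - (void)renderWithModelView:(const float *)modelView
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Layer 1: camera video frame drawn as a full-screen textured quad.
        glDisable(GL_DEPTH_TEST);
        glUseProgram(_backgroundProgram);
        glBindTexture(GL_TEXTURE_2D, _cameraFrameTexture);
        glBindBuffer(GL_ARRAY_BUFFER, _fullScreenQuadBuffer);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        // Layer 2: skin texture mapped over the tracked 3D mesh,
        // transformed by the ModelView matrix returned by the tracker.
        glEnable(GL_DEPTH_TEST);
        glUseProgram(_maskProgram);
        glUniformMatrix4fv(_modelViewUniform, 1, GL_FALSE, modelView);
        glBindTexture(GL_TEXTURE_2D, _skinTexture);
        glBindBuffer(GL_ARRAY_BUFFER, _faceVertexBuffer);   // refilled each frame from the 3D vertices
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _faceIndexBuffer);
        glDrawElements(GL_TRIANGLES, _faceIndexCount, GL_UNSIGNED_SHORT, 0);

        // Finally, present the renderbuffer through the EAGL context.
    }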

3D Head Pose

The X, Y, and Z position of the user’s head is reported in the coordinate system described above (see Figure 1 – Camera Space).

The user’s head pose is captured by three angles: pitch, roll, and yaw.

Figure 3.  Head Pose Angles
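If the application needs explicit pitch, yaw and roll values, they can be derived from the rotation part of the ModelView matrix returned with the 3D vertices. The helper below is a generic sketch, not an SDK function: it assumes a column-major 4x4 matrix whose upper-left 3x3 block is a pure rotation, decomposed in Tait-Bryan Z-Y-X order, so the formulas may need adjusting to match the convention your renderer uses.

    #include <math.h>

    // Generic sketch (not an SDK call): derive pitch, yaw and roll in degrees
    // from a column-major 4x4 ModelView matrix, assuming the upper-left 3x3
    // block is a pure rotation and a Z-Y-X (Tait-Bryan) decomposition.
    static void eulerAnglesFromModelView(const float m[16],
                                         float *pitch, float *yaw, float *roll)
    {
        // Column-major layout: the element at row r, column c is m[c * 4 + r].
        float r00 = m[0],  r10 = m[1],  r20 = m[2];
        float r21 = m[6],  r22 = m[10];
        const float toDeg = 180.0f / (float)M_PI;

        *pitch = atan2f(r21, r22) * toDeg;                           // rotation about X
        *yaw   = atan2f(-r20, sqrtf(r00 * r00 + r10 * r10)) * toDeg; // rotation about Y
        *roll  = atan2f(r10, r00) * toDeg;                           // rotation about Z
    }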

Face Recognition SDK Code Samples

The Face Recognition and Tracking SDK can include native code samples that demonstrate basic functionality of the SDK and the use of various parameters to optimize performance.

The samples demonstrate, among other things:

  • how to track an individual face and animate corresponding parameters onto an avatar-like object.
  • similar functionality with multiple faces.
  • how various modes and settings within the Face Recognition SDK can be modified to optimize performance.
  • how to generate and store a video (or a GIF or a picture) from a sequence of face tracking.
  • how to share the video (GIF, picture) on social media and in instant messaging.
  • how to apply Augmented Reality effects with rigid 3D objects.
  • how to apply Augmented Reality effects with flexible 3D objects and particles.
  • how to apply Augmented Reality effects with 3D animations.

Such samples are available on demand and may be bundled with the Face Recognition SDK for paying customers.