Motion Capture Infant Avatar Project

This video documents the proof-of-concept development of a 3D infant avatar whose facial expressions are retargeted from a live actor (me!). The base 3D model for the avatar was purchased from TurboSquid. Motion capture of my facial expressions was done with Faceshift software and a Kinect for Xbox 360; Faceshift was also used to create and record the initial retargeting sequence as an FBX file. Editing of both the base 3D model and the Faceshift-generated FBX animation was done in Maya to produce the final product.

This infant model is an initial prototype, meant to demonstrate the potential of motion capture for developing a believable, interactive virtual infant. Because the use of expressive and socially engaging computer avatars (relational agents) is an open research topic, I have shown how the model might be tested with a tool such as FaceReader, which interprets the facial expressions of the viewer. Measuring whether people react to a relational agent as they do to a real human being is one potential method for evaluating user acceptance of this type of computer interface.
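To make the Maya editing step mentioned above more concrete, here is a minimal Maya Python sketch of the kind of cleanup involved. It is not the actual script used for this project: the file path, the blendShape node name, and the channel names are hypothetical placeholders, and it assumes the Faceshift FBX export drives the avatar through keyed blendshape channels.

```python
# A hypothetical sketch of FBX import and animation cleanup in Maya,
# not this project's actual script. Assumes a scene containing the
# purchased infant mesh, driven by keyed blendShape channels.
import maya.cmds as cmds
import maya.mel as mel

# Make sure Maya's FBX plug-in is loaded before importing.
cmds.loadPlugin("fbxmaya", quiet=True)

# Import the Faceshift-recorded retargeting take (placeholder path).
mel.eval('FBXImport -f "/mocap/faceshift_take01.fbx"')

# Example edit: tone down one expression channel across the whole
# animation, e.g. softening an exaggerated smile to 80% strength.
# "infantShapes.mouthSmile_L" stands in for the rig's real names.
cmds.scaleKey("infantShapes.mouthSmile_L", valueScale=0.8, valuePivot=0.0)

# Example edit: clamp stray negative weights that can appear after
# retargeting, so a blendshape never deforms in the wrong direction.
times = cmds.keyframe("infantShapes.jawOpen", query=True, timeChange=True) or []
values = cmds.keyframe("infantShapes.jawOpen", query=True, valueChange=True) or []
for t, v in zip(times, values):
    if v < 0.0:
        cmds.setKeyframe("infantShapes.jawOpen", time=t, value=0.0)
```

In practice, edits like these can also be made interactively in Maya's Graph Editor; the script form just makes the operations explicit.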

[Video: MoCap Final Overview from Crystal Butler on Vimeo]