A modular framework for performance-based facial animation

Carlos Cubas, Antonio Carlos Sementille

Abstract


In recent decades, interest in capturing human facial movements and identifying expressions in order to generate realistic facial animations has grown in both the scientific community and the entertainment industry. We present a modular framework for testing algorithms used in performance-based facial animation. The framework comprises the modules found in pipelines in the literature: a module for creating datasets of blendshapes, i.e., facial models in which each vector represents an individual facial expression; an algorithm-processing module for identifying blendshape weights; and, finally, a retargeting module that drives a virtual face from those blendshapes. The framework uses an RGB-D camera, Intel's RealSense F200.
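As a minimal sketch of the delta-blendshape model that underlies such pipelines (not the authors' implementation; the names `neutral`, `deltas`, and `weights` are illustrative), a facial pose can be expressed as the neutral face plus a weighted sum of expression displacement vectors:

```python
import numpy as np

def blend_face(neutral, deltas, weights):
    """Return vertices = neutral + sum_k w_k * delta_k.

    neutral: (V,) flattened neutral-face vertex positions
    deltas:  (K, V) per-blendshape displacements from the neutral face
    weights: (K,) blendshape weights, typically estimated per frame
             from the captured performance
    """
    weights = np.asarray(weights, dtype=float)
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: 12 vertex coordinates, 2 expression blendshapes.
neutral = np.zeros(12)
deltas = np.stack([np.ones(12), 2.0 * np.ones(12)])
pose = blend_face(neutral, deltas, [0.5, 0.25])
print(pose[0])  # 0.5*1 + 0.25*2 = 1.0
```

In a pipeline like the one described, the weight-identification module would solve for `weights` each frame from the RGB-D input, and the retargeting module would apply the same weights to a different character's blendshape set.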

Keywords


Facial movement capture; blendshapes; retargeting.
