Abstract

Expressive facial animations are essential to enhance the realism and the credibility of virtual characters. Parameter-based animation methods offer precise control over facial configurations, while performance-based animation benefits from the naturalness of captured human motion. In this paper, we propose an animation system that gathers the advantages of both approaches. By analyzing a database of facial motion, we create the human appearance space. The appearance space provides a coherent and continuous parameterization of human facial movements, while encapsulating the coherence of real facial deformations. We present a method to optimally construct an analogous appearance space for a synthetic character. The link between both appearance spaces makes it possible to retarget facial animation onto a synthetic face from a video source. Moreover, the topological characteristics of the appearance space allow us to detect the principal variation patterns of a face and automatically reorganize them on a low-dimensional control space. The control space acts as an interactive user interface to manipulate the facial expressions of any synthetic face. This interface makes it simple and intuitive to generate still facial configurations for keyframe animation, as well as complete temporal sequences of facial movements. The resulting animations combine the flexibility of a parameter-based system and the realism of real human motion.
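As a rough illustration of how an appearance space and its low-dimensional control space could be built from a facial motion database, the following is a minimal sketch assuming the database is a matrix of per-frame feature vectors (e.g. stacked landmark coordinates) and that the principal variation patterns are extracted with PCA. The function names, data layout, and the choice of PCA are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def build_appearance_space(frames, n_dims=10):
    """Project a database of facial motion frames onto a low-dimensional space.

    frames : (n_frames, n_features) array of captured facial measurements
             (hypothetical layout, e.g. concatenated landmark coordinates).
    Returns the mean face, the basis of variation patterns, and the
    per-frame coordinates in that basis.
    """
    frames = np.asarray(frames, dtype=float)
    mean_face = frames.mean(axis=0)
    centered = frames - mean_face
    # Principal variation patterns via SVD (i.e. PCA); this stands in for
    # whatever analysis the actual system performs on the motion database.
    _, _, components = np.linalg.svd(centered, full_matrices=False)
    basis = components[:n_dims]          # (n_dims, n_features)
    coords = centered @ basis.T          # (n_frames, n_dims) control coordinates
    return mean_face, basis, coords

def synthesize_face(mean_face, basis, control_point):
    """Map a point of the low-dimensional control space back to a face shape."""
    return mean_face + np.asarray(control_point) @ basis
```

In such a setup, a user interface could expose the first few coordinates of `control_point` as sliders, so that dragging them produces plausible facial configurations for keyframing, while full coordinate trajectories reproduce captured temporal sequences.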

  • Publication date: 2010-2