Abstract

We present a novel approach for synthesizing the dynamic facial expressions of a source subject and transferring them to a target subject. The synthesized animation of the target subject preserves both the facial appearance of the target subject and the expression deformation of the source subject. We use an active appearance model (AAM) to separate and align the shapes and textures of multi-expression facial images. The dynamic facial expressions of the source subject are obtained by a nonlinear TensorFace model trained on a small sample set. By interpolating the aligned sequential shapes of different expressions, we obtain smooth shape variations across expressions, according to which we warp neutral faces to the other expressions. However, the warped expressions lack expression details. We therefore transfer the facial details obtained by the nonlinear TensorFace to the warped dynamic expression faces using the proposed strategy. Experiments on the extended Cohn-Kanade (CK+) facial expression database show that our results have higher perceptual quality than those of state-of-the-art methods.
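To make the shape-interpolation and warping step concrete, the following minimal Python sketch (not part of the original paper; the function names and the use of scikit-image's `PiecewiseAffineTransform` are our illustrative assumptions) linearly blends two AAM-aligned landmark shapes and warps the neutral face image toward each intermediate shape.

```python
# Minimal sketch of shape interpolation + piecewise-affine warping.
# Assumes AAM-aligned 2D landmarks are given as (N, 2) arrays in (x, y) order.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp


def interpolate_shapes(shape_neutral, shape_expr, num_frames=10):
    """Linearly interpolate between two aligned landmark shapes."""
    alphas = np.linspace(0.0, 1.0, num_frames)
    return [(1.0 - a) * shape_neutral + a * shape_expr for a in alphas]


def warp_to_shape(neutral_image, shape_neutral, shape_target):
    """Warp the neutral face so its landmarks move to shape_target.

    skimage's warp() expects the inverse map (output coords -> input coords),
    so the transform is estimated from shape_target back to shape_neutral.
    """
    tform = PiecewiseAffineTransform()
    tform.estimate(shape_target, shape_neutral)
    return warp(neutral_image, tform)


# Usage (hypothetical inputs): generate a smooth expression sequence
# from a single neutral frame and two aligned landmark sets.
# frames = [warp_to_shape(neutral_img, landmarks_neutral, s)
#           for s in interpolate_shapes(landmarks_neutral, landmarks_smile)]
```

The warped frames produced this way carry only the coarse shape change; the expression details (wrinkles, shading) would still need to be added back, which is the role of the detail-transfer step described above.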

Full Text