Abstract

Automatically locating multiple feature points (i.e., the shape) in a facial image and then synthesizing the corresponding facial sketch are highly challenging since facial images typically exhibit a wide range of poses, expressions, and scales, and have differing degrees of illumination and/or occlusion. When the facial sketches are to be synthesized in the unique sketching style of a particular artist, the problem becomes even more complex. To resolve these problems, this paper develops an automatic facial sketch synthesis system based on a novel direct combined model (DCM) algorithm. The proposed system executes three cascaded procedures, namely, 1) synthesis of the facial shape from the input texture information (i.e., the facial image); 2) synthesis of the exaggerated facial shape from the synthesized facial shape; and 3) synthesis of a sketch from the original input image and the synthesized exaggerated shape. Previous proposals for reconstructing facial shapes and synthesizing the corresponding facial sketches are heavily reliant on the quality of the texture reconstruction results, which, in turn, are highly sensitive to occlusion and lighting effects in the input image. However, the DCM approach proposed in this paper accurately reconstructs the facial shape and then produces lifelike synthesized facial sketches without the need to recover occluded feature points or to restore the texture information lost as a result of unfavorable lighting conditions. Moreover, the DCM approach is capable of synthesizing facial sketches from input images with a wide variety of facial poses, gaze directions, and facial expressions even when such images are not included within the original training data set.
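The abstract describes a cascade of three synthesis stages. The sketch below illustrates one plausible way such a cascade could be wired together, using a simple coupled-PCA inference as a stand-in; this is only an illustrative assumption, not the paper's actual DCM formulation, and all function names, dimensions, and the toy data are hypothetical.

```python
# Illustrative three-stage cascade in the spirit of the abstract
# (texture -> shape -> exaggerated shape -> sketch).  The coupled-PCA
# inference used here is an assumed stand-in, NOT the paper's DCM algorithm.
import numpy as np


def fit_coupled_pca(X, Y, n_components=10):
    """Fit a joint PCA over stacked [X | Y] training pairs (hypothetical
    stand-in for a combined model).  X: (n, dx), Y: (n, dy)."""
    Z = np.hstack([X, Y])
    mean = Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z - mean, full_matrices=False)
    return {"mean": mean, "basis": Vt[:n_components], "dx": X.shape[1]}


def infer_y_from_x(model, x):
    """Infer the Y-part from an observed X-part by least-squares projection
    onto the joint subspace (again, only an illustrative approximation)."""
    dx = model["dx"]
    Bx = model["basis"][:, :dx]   # block of the basis covering the observed part
    By = model["basis"][:, dx:]   # block covering the part to be synthesized
    coeffs, *_ = np.linalg.lstsq(Bx.T, x - model["mean"][:dx], rcond=None)
    return model["mean"][dx:] + By.T @ coeffs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical training data: texture vectors, shapes, exaggerated shapes, sketches.
    textures = rng.normal(size=(200, 64))
    shapes = rng.normal(size=(200, 30))
    exaggerated = shapes * 1.5        # placeholder "exaggeration"
    sketches = rng.normal(size=(200, 64))

    # Three cascaded models, mirroring the abstract's three procedures.
    m1 = fit_coupled_pca(textures, shapes)                # 1) texture -> shape
    m2 = fit_coupled_pca(shapes, exaggerated)             # 2) shape -> exaggerated shape
    m3 = fit_coupled_pca(np.hstack([textures, exaggerated]), sketches)  # 3) image + shape -> sketch

    x = textures[0]
    s = infer_y_from_x(m1, x)                             # synthesized facial shape
    e = infer_y_from_x(m2, s)                             # synthesized exaggerated shape
    sk = infer_y_from_x(m3, np.hstack([x, e]))            # synthesized sketch vector
    print(sk.shape)
```

The point of the sketch is only the data flow: each stage consumes the previous stage's output, and the final stage conditions on both the original input image and the exaggerated shape, as stated in the abstract.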

  • Publication date: 2010-8