Abstract

We propose a motion-field-based, data-driven approach that animates static hair models using a set of precomputed motion data. We first sample motion sequences, obtained from physics-based hair simulation or dynamic hair capture, to construct a motion database. We then define motion states and construct a motion field over this database. Finally, we generate a motion sequence for the target hairstyle from the motion field through strand correspondence and motion control. Experimental results show that our approach achieves quality comparable to physics-based methods while running orders of magnitude faster.