Abstract

To reconstruct a deformable clothed human from a single image or multiple images, a multi-level topological graph convolutional network (MTGCN) built on the SMPL model is proposed. First, an initial SMPL model matching the pose and shape of the human body in the image is precomputed with existing methods. Second, a local feature map of the human body is obtained by an image feature extraction network. Third, the vertices of the SMPL model are projected onto the feature map to obtain local features at their corresponding locations. Finally, the multi-level topological graph convolutional network offsets the mesh vertices to produce the clothing effect. Its down-sampling and up-sampling modules fuse local features into global features, and a residual module compensates for the local information missing from the global features, thereby improving the quality of the reconstructed human body. On datasets synthesized from MGN and SURREAL, experiments show that the proposed method achieves lower chamfer distance and point-to-surface distance than comparable methods and produces better results for clothing details and body parts. In addition, the reconstructed 3D human mesh can be directly deformed in pose or body shape to generate dressed human animations.
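As a minimal sketch (not the authors' implementation) of the per-vertex feature sampling step described above, the snippet below projects SMPL vertices into an image feature map and bilinearly samples local features for each vertex. The helper `project_to_image`, the weak-perspective camera parameters (scale, translation), and all tensor shapes are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def project_to_image(vertices, scale, trans):
    """Hypothetical weak-perspective projection of SMPL vertices (N, V, 3)
    to normalized image coordinates in [-1, 1]."""
    xy = vertices[..., :2] * scale.view(-1, 1, 1) + trans.view(-1, 1, 2)
    return xy.clamp(-1.0, 1.0)


def sample_vertex_features(feature_map, vertices, scale, trans):
    """Sample local features at the projected vertex locations.

    feature_map: (N, C, H, W) output of the image feature extraction network.
    vertices:    (N, V, 3) vertices of the initial SMPL estimate.
    Returns:     (N, V, C) per-vertex local features, which a graph
                 convolutional network could then map to vertex offsets.
    """
    xy = project_to_image(vertices, scale, trans)          # (N, V, 2)
    grid = xy.unsqueeze(2)                                  # (N, V, 1, 2)
    feats = F.grid_sample(feature_map, grid,
                          mode="bilinear", align_corners=False)  # (N, C, V, 1)
    return feats.squeeze(-1).permute(0, 2, 1)               # (N, V, C)


if __name__ == "__main__":
    fmap = torch.randn(1, 256, 56, 56)     # dummy image feature map
    verts = torch.randn(1, 6890, 3) * 0.5  # dummy initial SMPL vertices
    s = torch.ones(1)                       # camera scale (assumed)
    t = torch.zeros(1, 2)                   # camera translation (assumed)
    print(sample_vertex_features(fmap, verts, s, t).shape)  # (1, 6890, 256)
```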

Full text