Pose Transfer Using Multiple Input Images

Xin Jin, Chenyu Fan, Wu Zhou, Hangbing Yin, Chaoen Xiao, Chao Xia, Shengxin Li, Huilin Zhou, Yujin Li

2023 China Automation Congress (CAC), 2023

Abstract
A key challenge of pose transfer lies in large pose variation and occlusion. Existing methods mostly transfer the pose of a single input image; as a result, they struggle to predict plausible content for invisible regions and cannot decouple the shape and style of clothing. Although pose transfer from a single input image can generate results with correct structure, it cannot preserve the original details of the image. To tackle this challenge, we present a two-stage generative model for pose transfer using multiple input images: a Feature Extraction stage followed by a Reposing stage. In the Feature Extraction stage, relevant features are extracted from each input image. In the Reposing stage, we introduce a pose-conditioned transformer-based StyleGAN generator, adding residual and fusion modules at different levels of the generator. In this way, the most relevant features from each input image are selected for weighted fusion, improving the quality of the results. We show that our method compares favorably against methods using a single input image in both quantitative evaluation and visual comparison.
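The fusion step described above can be illustrated with a minimal sketch: each source image contributes a feature vector, a pose-conditioned query scores the sources, and a softmax over the scores yields the fusion weights. This is an assumption about the mechanism, not the authors' implementation; the function names and the scaled dot-product scoring are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_features(pose_query, source_feats):
    """Weighted fusion of multi-source features (illustrative sketch).

    pose_query   : (d,)   pose-conditioned query vector (hypothetical)
    source_feats : (k, d) one feature vector per input image
    Returns the fused (d,) feature and the (k,) fusion weights.
    """
    d = pose_query.shape[0]
    # Score each source against the pose query (scaled dot product).
    scores = source_feats @ pose_query / np.sqrt(d)
    # Softmax turns scores into fusion weights that sum to 1.
    weights = softmax(scores)
    # Weighted sum selects the most pose-relevant source content.
    fused = weights @ source_feats
    return fused, weights
```

With two orthogonal source features and a query aligned with the first, the first source receives the larger weight, mimicking how the fusion module would favor the input image most relevant to the target pose.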
Key words
Pose Transfer, Pose-guided Person Image Synthesis, Multi-source Image Generation