VTON-MP: Multi-Pose Virtual Try-On via Appearance Flow and Feature Filtering

IEEE TRANSACTIONS ON CONSUMER ELECTRONICS (2023)

Abstract
Multi-pose virtual try-on has become a research focus for online clothes shopping because fixed-pose virtual try-on methods cannot show how a garment looks under a different pose. The main challenge of multi-pose virtual try-on is that detailed information in the generated image is difficult to preserve during pose transformation and garment deformation. To address this issue, we propose a multi-pose virtual try-on method via appearance flow and feature filtering (VTON-MP). First, a segmentation generation network conditioned on the 2D keypoints of the target pose predicts the body semantic layout under that pose. Second, the desired garment is warped to fit the body posture by the appearance flow figure alignment network (AFFAN). Third, a filtering-enhancement block (FEB) suppresses the weights of latent useless features and enhances the weights of effective appearance features. Finally, spatially-adaptive instance normalization (SAIN) further refines the spatial relationships among body parts in the resulting image. In subjective and objective experiments on the MPV dataset, the proposed VTON-MP achieves the best SSIM, PSNR, and FID among state-of-the-art methods. The experimental results demonstrate that the proposed algorithm better preserves image details (head, hands, arms, and trousers).
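The abstract only names the components, so the following is a minimal, hypothetical PyTorch sketch of two of the ideas it describes: warping a garment image with a predicted dense appearance-flow field, and a SPADE-style spatially-adaptive instance normalization (SAIN) layer conditioned on a semantic map. All function names, layer sizes, and tensor shapes are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of appearance-flow warping and a SAIN layer.
# Shapes, channel counts, and module names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp_with_appearance_flow(garment, flow):
    """Warp `garment` (N, C, H, W) by a dense appearance flow (N, 2, H, W).

    The flow holds per-pixel (dx, dy) offsets in pixels; they are converted
    to the normalized [-1, 1] grid expected by F.grid_sample.
    """
    n, _, h, w = garment.shape
    # Base sampling grid in normalized coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=garment.device),
        torch.linspace(-1, 1, w, device=garment.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel offsets to normalized offsets and add to the base grid.
    offset = torch.stack(
        (flow[:, 0] * 2.0 / max(w - 1, 1), flow[:, 1] * 2.0 / max(h - 1, 1)),
        dim=-1,
    )
    grid = base + offset
    return F.grid_sample(garment, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

class SAIN(nn.Module):
    """Spatially-adaptive instance normalization: instance-normalize the
    features, then modulate them with per-pixel scale/shift predicted from
    a semantic segmentation map (SPADE-style; layer sizes are assumed)."""
    def __init__(self, channels, seg_channels, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(seg_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, seg):
        seg = F.interpolate(seg, size=x.shape[-2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

# Toy usage with random tensors.
garment = torch.rand(1, 3, 256, 192)
flow = torch.zeros(1, 2, 256, 192)   # zero flow -> identity warp
warped = warp_with_appearance_flow(garment, flow)
feat = torch.rand(1, 64, 64, 48)
seg = torch.rand(1, 20, 256, 192)    # 20 assumed semantic classes
out = SAIN(64, 20)(feat, seg)
print(warped.shape, out.shape)
```

In this reading, AFFAN would supply the flow field consumed by the warping step, and SAIN would inject the predicted semantic layout into the generator's normalization layers; the FEB (not sketched here) would act as a channel/spatial reweighting of intermediate features.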
Key words
Clothing, Semantics, Distortion, Task analysis, Computational modeling, Faces, Image synthesis, Virtual try-on, appearance flow, semantic segmentation, instance normalization, feature filtering