Better Fit: Accommodate Variations in Clothing Types for Virtual Try-on
CoRR (2024)
Abstract
Image-based virtual try-on aims to transfer a target in-shop garment onto a
dressed model image. The objectives are to fully remove the original clothing
while preserving the content outside the try-on area, to dress the model
naturally in the target clothing, and to correctly inpaint the gap between the
target and original clothing. Tremendous effort has been devoted to this
popular research area, yet existing methods fail to preserve the type of the
target clothing when the try-on area is affected by the original clothing. In
this paper, we focus on the unpaired virtual try-on setting, i.e., the
practical scenario in which the target clothing differs from the clothing the
model originally wears. To break the correlation between the try-on area and
the original clothing, and to make the model learn the correct information to
inpaint, we propose an adaptive mask training paradigm that dynamically
adjusts the training masks. It not only improves the alignment and fit of the
clothing but also significantly enhances the fidelity of the virtual try-on
experience. Furthermore, we propose, for the first time, two metrics for
unpaired try-on evaluation, the Semantic-Densepose-Ratio (SDR) and
Skeleton-LPIPS (S-LPIPS), which assess the correctness of the clothing type
and the accuracy of the clothing texture, respectively. For unpaired try-on
validation, we construct a comprehensive cross-try-on benchmark (Cross-27)
with distinctive clothing items and model physiques, covering a broad range of
try-on scenarios. Experiments demonstrate the effectiveness of the proposed
methods, contributing to the advancement of virtual try-on technology and
offering new insights and tools for future research in the field. The code,
model, and benchmark will be publicly released.
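The abstract does not detail how the adaptive mask training paradigm adjusts
its masks, but the stated goal, breaking the correlation between the try-on
area and the original clothing silhouette, can be illustrated with a minimal
sketch. The code below is an assumption, not the authors' implementation: it
randomly expands the tight clothing mask into a looser region at training
time, so the inpainting model cannot infer the original clothing type from
the mask shape alone. The function name `adaptive_train_mask` and the
bounding-box expansion strategy are hypothetical.

```python
import numpy as np

def adaptive_train_mask(cloth_mask: np.ndarray,
                        rng: np.random.Generator,
                        max_expand: int = 16) -> np.ndarray:
    """Hypothetical sketch of an adaptive training mask.

    Replaces the tight, silhouette-shaped clothing mask with a randomly
    expanded rectangular region, hiding the original garment's outline
    from the inpainting model during training.
    """
    ys, xs = np.nonzero(cloth_mask)
    if ys.size == 0:  # nothing to mask
        return cloth_mask.copy()
    h, w = cloth_mask.shape
    # Expand each side of the clothing bounding box by a random margin,
    # clipped to the image borders.
    top = max(int(ys.min()) - int(rng.integers(0, max_expand + 1)), 0)
    bottom = min(int(ys.max()) + int(rng.integers(0, max_expand + 1)), h - 1)
    left = max(int(xs.min()) - int(rng.integers(0, max_expand + 1)), 0)
    right = min(int(xs.max()) + int(rng.integers(0, max_expand + 1)), w - 1)
    out = np.zeros_like(cloth_mask)
    out[top:bottom + 1, left:right + 1] = 1
    return out
```

Because the margins are resampled every iteration, the model sees many mask
shapes for the same garment, which is one plausible way to realize the
"dynamically adjusts training masks" behavior the paper describes.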