Multimodal-Conditioned Latent Diffusion Models for Fashion Image Editing
arXiv (2024)

Abstract
Fashion illustration is a crucial medium for designers to convey their
creative vision and transform design concepts into tangible representations
that showcase the interplay between clothing and the human body. In the context
of fashion design, computer vision techniques have the potential to enhance and
streamline the design process. Departing from prior research primarily focused
on virtual try-on, this paper tackles the task of multimodal-conditioned
fashion image editing. Our approach aims to generate human-centric fashion
images guided by multimodal prompts, including text, human body poses, garment
sketches, and fabric textures. To address this problem, we propose extending
latent diffusion models to incorporate these multiple modalities and modifying
the structure of the denoising network, taking multimodal prompts as input. To
condition the proposed architecture on fabric textures, we employ textual
inversion techniques and let diverse cross-attention layers of the denoising
network attend to textual and texture information, thus incorporating different
granularity conditioning details. Given the lack of datasets for the task, we
extend two existing fashion datasets, Dress Code and VITON-HD, with multimodal
annotations. Experimental evaluations demonstrate the effectiveness of our
proposed approach in terms of realism and coherence concerning the provided
multimodal inputs.
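The conditioning mechanism described above — letting different cross-attention layers of the denoising network attend to text tokens and to texture pseudo-tokens obtained via textual inversion — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the token dimensions, the shared key/value projection, and the split between "coarse" (text-only) and "fine" (text plus texture) layers are all simplifications for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, d):
    # queries: (N_q, d) spatial latent tokens from the denoising U-Net.
    # context: (N_c, d) conditioning tokens (text and/or texture).
    # For simplicity, keys and values are the context itself (no learned
    # projections, single head) -- a real model would project all three.
    scores = queries @ context.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ context

rng = np.random.default_rng(0)
d = 16
latent_tokens = rng.standard_normal((8, d))    # image latent features
text_tokens = rng.standard_normal((4, d))      # text-encoder embeddings (assumed)
texture_tokens = rng.standard_normal((2, d))   # fabric pseudo-tokens from
                                               # textual inversion (assumed)

# Different cross-attention layers can attend to different conditioning sets,
# injecting coarse (text-only) vs. fine (text + texture) detail:
coarse = cross_attention(latent_tokens, text_tokens, d)
fine = cross_attention(latent_tokens,
                       np.concatenate([text_tokens, texture_tokens]), d)
```

In the paper's framing, the texture pseudo-tokens are learned so that, when appended to the text context, they steer selected attention layers toward fabric-level appearance, while the remaining layers handle the broader textual description.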