DiffUHaul: A Training-Free Method for Object Dragging in Images
arXiv (2024)
Abstract
Text-to-image diffusion models have proven effective for solving many image
editing tasks. However, the seemingly straightforward task of seamlessly
relocating objects within a scene remains surprisingly challenging. Existing
methods addressing this problem often struggle to function reliably in
real-world scenarios due to a lack of spatial reasoning. In this work, we
propose a training-free method, dubbed DiffUHaul, that harnesses the spatial
understanding of a localized text-to-image model for the object dragging task.
Naively manipulating the layout inputs of the localized model tends to degrade
editing performance due to the intrinsic entanglement of object
representations in the model. To this end, we first apply attention masking in
each denoising step to make the generation more disentangled across different
objects, and we adopt a self-attention sharing mechanism to preserve
high-level object appearance. Furthermore, we propose a new diffusion
anchoring technique: in the early denoising steps, we interpolate the
attention features between the source and target images to smoothly fuse the
new layout with the original appearance; in the later denoising steps, we pass
the localized features from the source images to the interpolated images to
retain fine-grained object details. To adapt DiffUHaul to real-image editing,
we apply DDPM self-attention bucketing, which better reconstructs real images
with the localized model. Finally, we introduce an automated evaluation
pipeline for this task and showcase the efficacy of our method. Our results
are reinforced through a user preference study.
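
To make the disentanglement idea concrete, the following is a minimal,
runnable Python sketch of per-object attention masking: each image token
carries an object label and may only attend to tokens with the same label.
The token-labeling scheme, names, and shapes are illustrative assumptions,
not the paper's actual implementation.

    import torch
    import torch.nn.functional as F

    def masked_self_attention(q, k, v, labels):
        """Scaled dot-product self-attention where tokens attend only to
        tokens sharing their object label, keeping objects disentangled.

        q, k, v: (batch, tokens, dim); labels: (tokens,) integer object ids.
        """
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        same_obj = labels[:, None] == labels[None, :]   # (tokens, tokens)
        scores = scores.masked_fill(~same_obj, float("-inf"))
        return F.softmax(scores, dim=-1) @ v

    # Toy usage: 8 tokens split between background (label 0) and one object (1).
    q = k = v = torch.randn(1, 8, 32)
    labels = torch.tensor([0, 0, 0, 1, 1, 1, 0, 0])
    out = masked_self_attention(q, k, v, labels)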
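Similarly, the two-phase diffusion anchoring schedule can be illustrated on
dummy attention-feature tensors: early steps blend source and target features,
while late steps inject the source features unchanged. The linear
interpolation schedule and the 40%/60% phase split below are assumptions for
illustration only, not values taken from the paper.

    import torch

    def anchoring_step(src_feat, tgt_feat, step, n_steps, early_frac=0.4):
        """Return the attention features to use at a given denoising step.

        Early steps: interpolate source and target features so the new
        layout fuses smoothly with the original appearance.
        Late steps:  pass the source features through unchanged to retain
        fine-grained object details.
        """
        if step < early_frac * n_steps:
            alpha = step / (early_frac * n_steps)   # 0 -> source, 1 -> target
            return (1.0 - alpha) * src_feat + alpha * tgt_feat
        return src_feat  # late phase: inject localized source features

    # Toy usage: 16 denoising steps over fake (batch, tokens, dim) features.
    src = torch.randn(1, 64, 320)
    tgt = torch.randn(1, 64, 320)
    feats = [anchoring_step(src, tgt, s, n_steps=16) for s in range(16)]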