Getting it Right: Improving Spatial Consistency in Text-to-Image Models
arXiv (2024)
Abstract
One of the key shortcomings in current text-to-image (T2I) models is their
inability to consistently generate images which faithfully follow the spatial
relationships specified in the text prompt. In this paper, we offer a
comprehensive investigation of this limitation, while also developing datasets
and methods that achieve state-of-the-art performance. First, we find that
current vision-language datasets do not represent spatial relationships well
enough; to alleviate this bottleneck, we create SPRIGHT, the first
spatially-focused, large scale dataset, by re-captioning 6 million images from
4 widely used vision datasets. Through a 3-fold evaluation and analysis
pipeline, we find that SPRIGHT largely improves upon existing datasets in
capturing spatial relationships. To demonstrate its efficacy, we leverage only
~0.25% of SPRIGHT and achieve a 22% improvement in generating spatially
accurate images while also improving the FID and CMMD scores. Secondly, we find
that training on images containing a large number of objects results in
substantial improvements in spatial consistency. Notably, we attain
state-of-the-art on T2I-CompBench with a spatial score of 0.2133, by
fine-tuning on <500 images. Finally, through a set of controlled experiments
and ablations, we document multiple findings that we believe will enhance the
understanding of factors that affect spatial consistency in text-to-image
models. We publicly release our dataset and model to foster further research in
this area.