DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback
CoRR (2023)
Abstract
Despite their widespread success, Text-to-Image (T2I) models still struggle
to produce images that are both aesthetically pleasing and faithful to the
user's input text. We introduce DreamSync, a model-agnostic training algorithm
that, by design, improves the faithfulness of T2I models to the text input. DreamSync
builds off a recent insight from TIFA's evaluation framework -- that large
vision-language models (VLMs) can effectively identify the fine-grained
discrepancies between generated images and the text inputs. DreamSync uses this
insight to train T2I models without any labeled data; it improves T2I models
using its own generations. First, it prompts the model to generate several
candidate images for a given input text. Then, it uses two VLMs to select the
best generation: a Visual Question Answering model that measures the alignment
of generated images to the text, and another that measures the generation's
aesthetic quality. After selection, we use LoRA to iteratively finetune the T2I
model to guide its generation towards the selected best generations. DreamSync
does not need any additional human annotation, model architecture changes, or
reinforcement learning. Despite its simplicity, DreamSync improves both the
semantic alignment and aesthetic appeal of two diffusion-based T2I models,
evidenced by multiple benchmarks (+1.7% on TIFA, +2.9% on DSG1K, +3.4% on VILA
aesthetic) and human evaluation.
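The generate-then-select loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate`, `alignment_score` (the VQA-based faithfulness scorer), and `aesthetic_score` (the aesthetic VLM) are hypothetical stand-ins for the T2I model and the two vision-language scorers, and the thresholds are assumed values.

```python
def select_best(prompt, generate, alignment_score, aesthetic_score,
                n_candidates=8, align_threshold=0.9, aesthetic_threshold=0.6):
    """Sample candidate images for a prompt and keep the best one that
    passes both VLM filters; the survivor would serve as a LoRA
    finetuning target in the iterative loop the abstract describes."""
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        image = generate(prompt)
        a = alignment_score(prompt, image)  # VQA-based text faithfulness
        s = aesthetic_score(image)          # aesthetic quality
        # Keep only candidates that clear both thresholds, ranked by
        # the (assumed) combined score a + s.
        if a >= align_threshold and s >= aesthetic_threshold and a + s > best_score:
            best, best_score = image, a + s
    return best  # None if no candidate passes both filters
```

With stub scorers, the loop simply returns the candidate with the highest combined score among those that pass both filters.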