Compositional Video Generation as Flow Equalization
arXiv (2024)
Abstract
Large-scale Text-to-Video (T2V) diffusion models have recently demonstrated
unprecedented capability to transform natural language descriptions into
stunning and photorealistic videos. Despite the promising results, a
significant challenge remains: these models struggle to fully grasp complex
compositional interactions between multiple concepts and actions. This issue
arises when some words disproportionately influence the final video, overshadowing
other concepts. To tackle this problem, we introduce \textbf{Vico}, a generic
framework for compositional video generation that explicitly ensures all
concepts are represented properly. At its core, Vico analyzes how input tokens
influence the generated video, and adjusts the model to prevent any single
concept from dominating. Specifically, Vico extracts attention weights from all
layers to build a spatial-temporal attention graph, and then estimates the
influence as the \emph{max-flow} from the source text token to the video target
token. Although the direct computation of attention flow in diffusion models is
typically infeasible, we devise an efficient approximation based on subgraph
flows and employ a fast and vectorized implementation, which in turn makes the
flow computation manageable and differentiable. By updating the noisy latent to
balance these flows, Vico captures complex interactions and consequently
produces videos that closely adhere to textual descriptions. We apply our
method to multiple diffusion-based video models for compositional T2V and video
editing. Empirical results demonstrate that our framework significantly
enhances the compositional richness and accuracy of the generated videos. Visit
our website
at~\href{https://adamdad.github.io/vico/}{\url{https://adamdad.github.io/vico/}}.