VEnhancer: Generative Space-Time Enhancement for Video Generation
arXiv (2024)
Abstract
We present VEnhancer, a generative space-time enhancement framework that
improves existing text-to-video results by adding more details in the spatial
domain and synthesizing detailed motion in the temporal domain. Given a
generated low-quality video, our approach can increase its spatial and
temporal resolution simultaneously with arbitrary up-sampling space and time
scales through a unified video diffusion model. Furthermore, VEnhancer
effectively removes spatial artifacts and temporal flickering in generated
videos. To achieve this, based on a pretrained video diffusion model, we train
a video ControlNet and inject it into the diffusion model as a condition on
low frame-rate and low-resolution videos. To train this video ControlNet
effectively, we design space-time data augmentation as well as video-aware
conditioning. Benefiting from these designs, VEnhancer is stable during
training and supports an elegant end-to-end training scheme. Extensive
experiments show that VEnhancer surpasses existing state-of-the-art video
super-resolution and space-time super-resolution methods in enhancing
AI-generated videos. Moreover, with VEnhancer, the existing open-source
state-of-the-art text-to-video method, VideoCrafter-2, reaches first place on
the video generation benchmark VBench.
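The space-time data augmentation mentioned above can be illustrated with a minimal sketch: a low frame-rate, low-resolution conditioning clip is derived from a high-quality clip by sub-sampling frames and pixels at randomly drawn scales. The function names, scale range, and nearest-neighbour strided down-sampling here are illustrative assumptions, not the paper's actual degradation pipeline.

```python
import numpy as np

def space_time_degrade(video, s_scale, t_scale):
    """Build a low-quality conditioning clip from a high-quality one.
    video: float array of shape (T, H, W, C)."""
    # Temporal down-sampling: keep every t_scale-th frame (low frame rate).
    lq = video[::t_scale]
    # Spatial down-sampling: nearest-neighbour strided sub-sampling
    # (a stand-in for the blur/resize degradations real pipelines use).
    lq = lq[:, ::s_scale, ::s_scale, :]
    return lq

def sample_augmentation(rng, max_s=8, max_t=8):
    """Randomly draw space and time scales so the ControlNet sees many
    degradation levels during training (hypothetical range)."""
    return rng.integers(1, max_s + 1), rng.integers(1, max_t + 1)

rng = np.random.default_rng(0)
hq = rng.random((16, 64, 64, 3)).astype(np.float32)  # toy HQ clip
s, t = sample_augmentation(rng)
lq = space_time_degrade(hq, s, t)  # conditioning input for the ControlNet
```

Randomizing both scales per sample is what lets a single model handle arbitrary up-sampling factors at inference time, as the abstract claims.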