FastVideoEdit: Leveraging Consistency Models for Efficient Text-to-Video Editing
arXiv (2024)
Abstract
Diffusion models have demonstrated remarkable capabilities in text-to-image
and text-to-video generation, opening up possibilities for video editing based
on textual input. However, the computational cost associated with sequential
sampling in diffusion models poses challenges for efficient video editing.
Existing approaches relying on image generation models for video editing suffer
from time-consuming one-shot fine-tuning, additional condition extraction, or
DDIM inversion, making real-time applications impractical. In this work, we
propose FastVideoEdit, an efficient zero-shot video editing approach inspired
by Consistency Models (CMs). By leveraging the self-consistency property of
CMs, we eliminate the need for time-consuming inversion or additional condition
extraction, reducing editing time. Our method enables a direct mapping from the
source video to the target video, with strong content preservation achieved
through a special variance schedule. This yields a clear speed advantage, as
fewer sampling steps can be used while maintaining comparable generation
quality.
Experimental results validate the state-of-the-art performance and speed
advantages of FastVideoEdit across evaluation metrics encompassing editing
speed, temporal consistency, and text-video alignment.
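
For intuition, the sketch below shows a generic consistency-style multistep sampler applied to editing; it is an illustration of the general technique the abstract invokes, not the paper's actual algorithm. A consistency function is assumed to map any noisy latent directly to a clean estimate in a single call (the self-consistency property), so the source latent can simply be noised and re-denoised under the target text condition, with no DDIM inversion. All names here (`f_theta`, `sigmas`, `edit_cond`) are hypothetical placeholders, and the re-noising rule is simplified.

```python
import torch

def consistency_edit_sample(f_theta, x_src, sigmas, edit_cond):
    """Minimal sketch of consistency-model multistep sampling for editing.

    f_theta(x, sigma, cond) is assumed to map a noisy latent at noise level
    sigma directly to a clean-sample estimate (self-consistency).
    sigmas is a short, strictly decreasing noise schedule, and edit_cond is
    the target text embedding. All names are illustrative, not the paper's API.
    """
    # Noise the source latent to the largest level in one shot,
    # instead of running a costly step-by-step DDIM inversion.
    x = x_src + sigmas[0] * torch.randn_like(x_src)
    for i, sigma in enumerate(sigmas):
        # A single network call jumps from the noisy latent straight
        # to a clean estimate under the editing condition.
        x0 = f_theta(x, sigma, edit_cond)
        if i + 1 < len(sigmas):
            # Re-noise to the next (smaller) level and refine again;
            # a handful of such steps replaces a long diffusion chain.
            x = x0 + sigmas[i + 1] * torch.randn_like(x0)
        else:
            x = x0
    return x
```

Because each call of `f_theta` already lands on (an estimate of) the clean data manifold, the loop can use very few iterations, which is the source of the speed advantage the abstract describes.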