Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation
arXiv (2024)
Abstract
Image diffusion distillation achieves high-fidelity generation with very few
sampling steps. However, applying these techniques directly to video diffusion
often results in unsatisfactory frame quality due to the limited visual quality
in public video datasets. This affects the performance of both teacher and
student video diffusion models. Our study aims to improve video diffusion
distillation while enhancing frame appearance using abundant high-quality image
data. We propose motion consistency model (MCM), a single-stage video diffusion
distillation method that disentangles motion and appearance learning.
Specifically, MCM includes a video consistency model that distills motion from
the video teacher model, and an image discriminator that enhances frame
appearance to match high-quality image data. This combination presents two
challenges: (1) conflicting frame learning objectives, as video distillation
learns from low-quality video frames while the image discriminator targets
high-quality images; and (2) training-inference discrepancies due to the
differing quality of video samples used during training and inference. To
address these challenges, we introduce disentangled motion distillation and
mixed trajectory distillation. The former applies the distillation objective
solely to the motion representation, while the latter mitigates
training-inference discrepancies by mixing distillation trajectories from both
the low- and high-quality video domains. Extensive experiments show that our
MCM achieves state-of-the-art video diffusion distillation performance.
Additionally, our method can enhance frame quality in video diffusion models,
producing frames with high aesthetic scores or specific styles without
corresponding video data.
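
The abstract does not specify how disentangled motion distillation is implemented. Below is a minimal PyTorch sketch of the general idea, assuming video latents of shape (batch, frames, channels, height, width) and using frame-to-frame latent differences as a stand-in motion representation; the function names and the choice of representation are hypothetical, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def motion_representation(latents: torch.Tensor) -> torch.Tensor:
    """Extract a simple motion representation from video latents.

    latents: (batch, frames, channels, height, width).
    Motion is approximated here as frame-to-frame latent differences,
    a hypothetical stand-in for the paper's motion representation.
    """
    return latents[:, 1:] - latents[:, :-1]

def disentangled_motion_loss(student_pred: torch.Tensor,
                             teacher_target: torch.Tensor) -> torch.Tensor:
    """Consistency-distillation loss applied only to motion.

    Per-frame appearance is left to the image discriminator, so this
    objective never pushes the student toward the low-quality frame
    appearance of the video teacher.
    """
    return F.mse_loss(motion_representation(student_pred),
                      motion_representation(teacher_target))
```

Because the loss is computed on frame differences rather than the frames themselves, the gradient carries temporal structure from the teacher while leaving per-frame appearance free to match the high-quality image data.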
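
Similarly, mixed trajectory distillation is only described at a high level. The sketch below illustrates one plausible reading: distillation trajectories start from clips drawn from either the low-quality video dataset or the student's own (appearance-enhanced) generations, mixed by a Bernoulli choice. The helper name and the mixing probability are assumptions for illustration.

```python
import torch

def pick_trajectory_start(real_latents: torch.Tensor,
                          generated_latents: torch.Tensor,
                          p_real: float = 0.5) -> torch.Tensor:
    """Choose the starting clip for one distillation trajectory.

    real_latents: encoded clips from the (low-quality) video dataset.
    generated_latents: clips sampled from the current student, whose
    frame appearance has been pushed toward high-quality images.
    Mixing the two domains keeps the trajectories seen in training
    closer to those encountered at inference.
    """
    use_real = torch.rand(()) < p_real  # Bernoulli mix between domains
    return real_latents if use_real else generated_latents
```

Under this reading, the mixing probability trades off fidelity to the teacher's trajectories against coverage of the high-quality domain the student actually samples from at inference time.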