Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners
CVPR 2024
Abstract
Video and audio content creation is a core technique for the movie industry
and professional users. Existing diffusion-based methods tackle video and
audio generation separately, which hinders the transfer of these techniques
from academia to industry. In this work, we aim to fill this gap with a
carefully designed optimization-based framework for cross-visual-audio and
joint-visual-audio generation. We observe that off-the-shelf video and audio
generation models already possess powerful generation ability. Thus, instead
of training giant models from scratch, we propose to bridge the existing
strong models through a shared latent representation space. Specifically, we
propose a multimodality latent aligner built on the pre-trained ImageBind
model. Our latent aligner shares a similar core with classifier guidance,
steering the diffusion denoising process at inference time. Through a
carefully designed optimization strategy and loss functions, we show the
superior performance of our method on joint video-audio generation,
visual-steered audio generation, and audio-steered visual generation tasks.
The project website can be found at
https://yzxing87.github.io/Seeing-and-Hearing/
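
The following is a minimal, hypothetical PyTorch sketch of the inference-time guidance idea the abstract describes: at each denoising step, a frozen multimodal encoder (ImageBind in the paper; a dummy stand-in here) embeds the current sample, and the gradient of an embedding-similarity loss against a conditioning embedding from the other modality nudges the latent, in the spirit of classifier guidance. `DummyEmbedder`, `alignment_guidance`, `denoise_with_alignment`, and the toy denoiser are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of classifier-guidance-style latent
# alignment: the gradient of an embedding-similarity loss steers each
# denoising step toward a conditioning embedding from another modality.
import torch
import torch.nn.functional as F


class DummyEmbedder(torch.nn.Module):
    """Placeholder for a frozen multimodal encoder (e.g., ImageBind)."""
    def __init__(self, in_dim: int = 64, embed_dim: int = 32):
        super().__init__()
        self.proj = torch.nn.Linear(in_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(x), dim=-1)


def alignment_guidance(latent: torch.Tensor,
                       cond_embed: torch.Tensor,
                       embedder: torch.nn.Module,
                       scale: float = 1.0) -> torch.Tensor:
    """Gradient of an embedding-similarity loss w.r.t. the current latent."""
    latent = latent.detach().requires_grad_(True)
    embed = embedder(latent)
    # Maximize cosine similarity between the sample's embedding and the
    # conditioning embedding from the other modality.
    loss = 1.0 - F.cosine_similarity(embed, cond_embed, dim=-1).mean()
    grad, = torch.autograd.grad(loss, latent)
    return scale * grad


def denoise_with_alignment(latent, cond_embed, denoiser, embedder,
                           num_steps: int = 50, scale: float = 1.0):
    """Toy denoising loop: each step applies the model update plus guidance."""
    for _ in range(num_steps):
        with torch.no_grad():
            latent = denoiser(latent)          # one plain denoising update
        latent = latent - alignment_guidance(  # steer toward the condition
            latent, cond_embed, embedder, scale)
    return latent


if __name__ == "__main__":
    torch.manual_seed(0)
    embedder = DummyEmbedder()
    denoiser = lambda z: 0.95 * z                    # stand-in diffusion model
    cond = F.normalize(torch.randn(1, 32), dim=-1)   # e.g., audio embedding
    z = torch.randn(1, 64)
    out = denoise_with_alignment(z, cond, denoiser, embedder)
    print(out.shape)
```

In the paper's setting, the conditioning embedding would come from the audio or visual branch of ImageBind and the denoiser from a pre-trained video or audio diffusion model; the guidance scale trades off fidelity to the base generator against cross-modal alignment.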