Exploring Pre-trained Text-to-Video Diffusion Models for Referring Video Object Segmentation
arXiv (2024)
Abstract
In this paper, we explore the visual representations produced from a
pre-trained text-to-video (T2V) diffusion model for video understanding tasks.
We hypothesize that the latent representation learned from a pretrained
generative T2V model encapsulates rich semantics and coherent temporal
correspondences, thereby naturally facilitating video understanding. Our
hypothesis is validated through the classic referring video object segmentation
(R-VOS) task. We introduce a novel framework, termed “VD-IT”, tailored with
dedicatedly designed components built upon a fixed pretrained T2V model.
Specifically, VD-IT uses textual information as a conditional input, ensuring
semantic consistency across time for precise temporal instance matching. It
further incorporates image tokens as supplementary textual inputs, enriching
the feature set to generate detailed and nuanced masks.Besides, instead of
using the standard Gaussian noise, we propose to predict the video-specific
noise with an extra noise prediction module, which can help preserve the
feature fidelity and elevates segmentation quality. Through extensive
experiments, we surprisingly observe that fixed generative T2V diffusion
models, unlike commonly used video backbones (e.g., Video Swin Transformer)
pretrained with discriminative image/video pretext tasks, exhibit better potential
to maintain semantic alignment and temporal consistency. On existing standard
benchmarks, our VD-IT achieves highly competitive results, surpassing many
existing state-of-the-art methods. The code will be made publicly available.
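
The abstract describes three ingredients: text conditioning of a fixed T2V diffusion backbone, image tokens appended to the textual condition, and a learned noise predictor that replaces sampled Gaussian noise. Below is a minimal PyTorch sketch of how these pieces could fit together for feature extraction. All module names, tensor shapes, and the one-step noise schedule are hypothetical stand-ins chosen for illustration; this is not the authors' released implementation.

```python
# Minimal sketch of the three ideas in the abstract, with hypothetical stand-in modules
# so the example is self-contained and runnable.
import torch
import torch.nn as nn

class FrozenT2VBackbone(nn.Module):
    """Stand-in for a fixed pre-trained text-to-video diffusion UNet.
    Takes a noised video latent, a timestep, and a conditioning token sequence,
    and returns intermediate features (hypothetical interface)."""
    def __init__(self, latent_dim=4, cond_dim=768, feat_dim=256):
        super().__init__()
        self.proj_in = nn.Conv3d(latent_dim, feat_dim, kernel_size=3, padding=1)
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        self.cond_proj = nn.Linear(cond_dim, feat_dim)
        for p in self.parameters():
            p.requires_grad_(False)  # the T2V backbone stays fixed, as in the paper

    def forward(self, noised_latent, timestep, cond_tokens):
        b, c, t, h, w = noised_latent.shape
        x = self.proj_in(noised_latent)                    # (B, D, T, H, W)
        tokens = x.flatten(2).transpose(1, 2)              # (B, T*H*W, D)
        cond = self.cond_proj(cond_tokens)                 # (B, L, D)
        attended, _ = self.cross_attn(tokens, cond, cond)  # text/image-token conditioning
        return attended.transpose(1, 2).view(b, -1, t, h, w)

class NoisePredictor(nn.Module):
    """Extra module predicting video-specific noise from the clean latent,
    replacing standard Gaussian noise to help preserve feature fidelity."""
    def __init__(self, latent_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(latent_dim, 64, 3, padding=1), nn.GELU(),
            nn.Conv3d(64, latent_dim, 3, padding=1),
        )

    def forward(self, video_latent):
        return self.net(video_latent)

def extract_features(video_latent, text_emb, image_tokens, backbone, noise_pred, t=10):
    """Condition = text embeddings plus supplementary image tokens; noise = predicted."""
    noise = noise_pred(video_latent)                                   # video-specific noise
    alpha = 0.9  # placeholder noise-schedule value for timestep t
    noised = alpha ** 0.5 * video_latent + (1 - alpha) ** 0.5 * noise  # forward diffusion step
    cond = torch.cat([text_emb, image_tokens], dim=1)                  # image tokens as extra condition
    return backbone(noised, t, cond)

if __name__ == "__main__":
    B, T, H, W = 1, 4, 16, 16
    latent = torch.randn(B, 4, T, H, W)   # VAE-encoded video latent (assumed shape)
    text = torch.randn(B, 20, 768)        # text-encoder embeddings of the referring expression
    img_tokens = torch.randn(B, 8, 768)   # image tokens appended to the textual input
    feats = extract_features(latent, text, img_tokens, FrozenT2VBackbone(), NoisePredictor())
    print(feats.shape)                    # features would feed a downstream mask decoder
```

In the full method these features would be passed to a segmentation decoder for temporal instance matching and mask prediction; the sketch only covers the conditioning and noise-prediction steps named in the abstract.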