QDFormer: Towards Robust Audiovisual Segmentation in Complex Environments with Quantization-based Semantic Decomposition
arXiv (2023)
Abstract
Audiovisual segmentation (AVS) is a challenging task that aims to segment
visual objects in videos according to their associated acoustic cues. With
multiple sound sources and background disturbances involved, establishing
robust correspondences between audio and visual contents poses unique
challenges due to (1) complex entanglement across sound sources and (2)
frequent changes in the occurrence of distinct sound events. Assuming sound
events occur independently, the multi-source semantic space can be represented
as the Cartesian product of single-source sub-spaces. We are motivated to
decompose the multi-source audio semantics into single-source semantics for
more effective interactions with visual content. We propose a semantic
decomposition method based on product quantization, where the multi-source
semantics can be decomposed and represented by several disentangled and
noise-suppressed single-source semantics. Furthermore, we introduce a
global-to-local quantization mechanism, which distills knowledge from stable
global (clip-level) features into local (frame-level) ones, to handle frequent
changes in audio semantics. Extensive experiments demonstrate that our
semantically decomposed audio representation significantly improves AVS
performance, e.g., +21.2 with the ResNet50 backbone. Code: https://github.com/lxa9867/QSD.
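The core idea of the abstract — representing a multi-source semantic space as a product of single-source sub-spaces and decomposing features via product quantization — can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, shapes, and codebook sizes are illustrative assumptions. Each sub-vector of a "multi-source" feature is quantized independently against its own codebook, yielding disentangled per-sub-space codes:

```python
import numpy as np

def product_quantize(feature, codebooks):
    """Illustrative product quantization (not the paper's API):
    split `feature` into equal sub-vectors, snap each to the nearest
    codeword in its own codebook, and return the per-sub-space code
    indices plus the quantized reconstruction."""
    sub_dim = feature.shape[0] // len(codebooks)
    codes, quantized = [], []
    for i, codebook in enumerate(codebooks):
        sub = feature[i * sub_dim:(i + 1) * sub_dim]
        # nearest codeword in this sub-space (L2 distance)
        idx = int(np.argmin(np.linalg.norm(codebook - sub, axis=1)))
        codes.append(idx)
        quantized.append(codebook[idx])
    return codes, np.concatenate(quantized)

# Toy example: an 8-dim "multi-source" feature split into 2 sub-spaces,
# each quantized by a 4-entry codebook of 4-dim codewords.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(4, 4)) for _ in range(2)]
feature = rng.normal(size=8)
codes, reconstruction = product_quantize(feature, codebooks)
```

Because each sub-space is quantized independently, the set of representable vectors is the Cartesian product of the codebooks — here 4 × 4 = 16 combinations from only 8 stored codewords — mirroring the abstract's independence assumption across sound sources.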