Can Textual Semantics Mitigate Sounding Object Segmentation Preference?
arXiv (2024)
Abstract
The Audio-Visual Segmentation (AVS) task aims to segment sounding objects in
the visual space using audio cues. However, in this work we find that previous
AVS methods rely heavily on detrimental segmentation preferences for audible
objects rather than on precise audio guidance. We argue that the primary reason
is that audio lacks robust semantics compared to vision, especially in
multi-source sounding scenes, resulting in weak audio guidance over the visual
space. Motivated by the fact that the text modality is
well explored and contains rich abstract semantics, we propose leveraging text
cues from the visual scene to enhance audio guidance with the semantics
inherent in text. Our approach begins by obtaining scene descriptions through
an off-the-shelf image captioner and prompting a frozen large language model to
deduce potential sounding objects as text cues. Subsequently, we introduce a
novel semantics-driven audio modeling module with a dynamic mask to integrate
audio features with text cues, leading to representative sounding object
features. These features not only encompass audio cues but also possess vivid
semantics, providing clearer guidance in the visual space. Experimental results
on AVS benchmarks validate that our method exhibits enhanced sensitivity to
audio when aided by text cues, achieving highly competitive performance on all
three subsets. Project page:
https://github.com/GeWu-Lab/Sounding-Object-Segmentation-Preference
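
As a rough illustration of the first step the abstract describes (captioning the scene, then prompting a frozen LLM for potential sounding objects), a minimal sketch follows. The captioner checkpoint, the prompt wording, and the `extract_sounding_object_cues` helper are assumptions for illustration, not the paper's actual code:

```python
# Sketch of the text-cue extraction step: caption the visual scene with an
# off-the-shelf captioner, then ask a frozen LLM which objects could sound.
# The BLIP checkpoint and prompt below are illustrative assumptions.
from transformers import pipeline

# Off-the-shelf image captioner (specific model choice is an assumption).
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def extract_sounding_object_cues(image_path: str, llm) -> str:
    """Return text cues naming potential sounding objects in the image.

    `llm` is any callable that maps a prompt string to generated text,
    standing in for the frozen large language model the abstract mentions.
    """
    # 1) Obtain a scene description from the frozen captioner.
    caption = captioner(image_path)[0]["generated_text"]
    # 2) Prompt the frozen LLM to deduce which described objects could sound.
    prompt = (
        f"Scene description: {caption}\n"
        "List the objects in this scene that could plausibly produce sound."
    )
    return llm(prompt)
```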
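The abstract also names a semantics-driven audio modeling module with a dynamic mask that integrates audio features with text cues. The architecture is not specified in the abstract; the sketch below is one plausible reading, using cross-attention from audio tokens to text-cue tokens with a soft per-token gate as the "dynamic mask". All layer choices and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class SemanticsDrivenAudioFusion(nn.Module):
    """Illustrative guess at the fusion module: audio queries attend to
    text cues, and a mask predicted from the audio gates how much text
    semantics each audio token absorbs. Not the paper's implementation."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # "Dynamic mask" read as a per-token sigmoid gate (assumption).
        self.mask_net = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # audio: (B, Ta, dim) audio tokens; text: (B, Tt, dim) text-cue tokens.
        # Cross-attention pulls text semantics into the audio stream.
        attended, _ = self.attn(query=audio, key=text, value=text)
        # Gate the injected semantics per audio token, then residual-add,
        # yielding sounding-object features that carry both audio and text cues.
        gate = self.mask_net(audio)
        return self.norm(audio + gate * attended)
```

Used this way, the module outputs audio features enriched with textual semantics, which would then serve as the clearer guidance over the visual space that the abstract claims.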