Modality Prompts for Arbitrary Modality Salient Object Detection
arXiv (2024)

Abstract
This paper delves into the task of arbitrary modality salient object
detection (AM SOD), which aims to detect salient objects from arbitrary
modalities, e.g., RGB images, RGB-D images, and RGB-D-T images. A novel
modality-adaptive Transformer (MAT) is proposed to investigate two fundamental
challenges of AM SOD: the more diverse modality discrepancies caused by the
varying modality types that need to be processed, and the dynamic fusion design
required by the uncertain number of modalities present in the inputs to the
multimodal fusion strategy. Specifically, inspired by prompt learning's ability
to align the distributions of pre-trained models with the characteristics of
downstream tasks by learning a few prompts, MAT first presents a
modality-adaptive feature extractor (MAFE) that tackles the diverse modality
discrepancies by introducing a modality prompt for each modality. In the
training stage, a new modality translation contrastive (MTC) loss is further
designed to assist MAFE in learning modality-distinguishable modality prompts.
Accordingly, in the testing stage, MAFE can employ those learned modality
prompts to adaptively adjust its feature space according to the characteristics
of the input modalities, and thus extract discriminative unimodal features.
Then, MAT presents a channel-wise and spatial-wise fusion hybrid (CSFH)
strategy to meet the demand for dynamic fusion. To that end, CSFH dedicates a
channel-wise dynamic fusion module (CDFM) and a novel spatial-wise dynamic
fusion module (SDFM) to fusing the unimodal features from varying numbers of
modalities while effectively capturing cross-modal complementary semantic and
detail information, respectively. Moreover, CSFH carefully aligns CDFM and
SDFM with different levels of unimodal features, based on their
characteristics, for more effective exploitation of complementary information.
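The modality-prompt idea in MAFE can be illustrated with a minimal NumPy sketch: one learned prompt token per modality is prepended to the input token sequence before a shared encoder, so the same extractor adapts its feature space per modality. All names here (`extract_unimodal`, the single linear projection standing in for the Transformer encoder, the random prompt initialisation) are hypothetical simplifications, not the paper's actual implementation; in MAT the prompts would be trained jointly with the MTC loss.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16  # token dimension (assumed for illustration)
MODALITIES = ["rgb", "depth", "thermal"]

# One learnable prompt token per modality (randomly initialised here;
# in the paper these would be learned with the MTC loss).
prompts = {m: rng.standard_normal((1, D)) for m in MODALITIES}

# Stand-in for the shared Transformer encoder: a single shared
# linear projection with a nonlinearity (hypothetical simplification).
W = rng.standard_normal((D, D)) / np.sqrt(D)

def extract_unimodal(tokens: np.ndarray, modality: str) -> np.ndarray:
    """Prepend the modality prompt, run the shared encoder, drop the prompt."""
    x = np.concatenate([prompts[modality], tokens], axis=0)  # (1+N, D)
    h = np.tanh(x @ W)                                       # shared encoder
    return h[1:]                                             # (N, D) features

feats = extract_unimodal(rng.standard_normal((8, D)), "depth")
```

The key point the sketch shows is that the encoder weights `W` are shared across modalities; only the prepended prompt changes, which is what lets a single extractor handle arbitrary input modalities.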
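The "dynamic fusion" requirement, that the fusion module must accept however many modalities are present, can be sketched as a channel-wise weighted sum whose weights are normalised across the modality axis. The function below is a hypothetical simplification of CDFM, not the paper's module: it derives per-channel weights from global average pooling and a softmax over modalities, so it works unchanged for two, three, or more inputs.

```python
import numpy as np

def channel_dynamic_fusion(feature_list):
    """Fuse C-channel feature maps from any number of modalities.

    Each modality gets per-channel weights from global average pooling,
    normalised with a softmax across the modality axis, then the maps are
    combined as a weighted sum (a simplified stand-in for CDFM).
    """
    stacked = np.stack(feature_list)                    # (M, C, H, W)
    gap = stacked.mean(axis=(2, 3))                     # (M, C) channel stats
    e = np.exp(gap - gap.max(axis=0, keepdims=True))    # stable softmax
    w = e / e.sum(axis=0, keepdims=True)                # weights over modalities
    return (w[:, :, None, None] * stacked).sum(axis=0)  # (C, H, W)

rng = np.random.default_rng(1)
fused2 = channel_dynamic_fusion([rng.standard_normal((4, 5, 5)) for _ in range(2)])
fused3 = channel_dynamic_fusion([rng.standard_normal((4, 5, 5)) for _ in range(3)])
```

Because the softmax runs over the stacked modality axis rather than a fixed number of branches, the same code fuses RGB, RGB-D, or RGB-D-T inputs, which is the property the abstract calls "dynamic fusion".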