Cross-Modal Matching and Adaptive Graph Attention Network for RGB-D Scene Recognition

ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023

Abstract
Despite significant advances in RGB-D scene recognition, several major limitations require further investigation. For example, simply extracting modal-specific features neglects the complex relationships among features from multiple modalities. Moreover, most existing methods do not consider cross-modal features. To address these concerns, we propose to integrate the tasks of cross-modal matching and modal-specific recognition in a Matching-to-Recognition Network (MRNet). Specifically, the cross-modal matching network enhances the descriptive power of the recognition network via a layer-wise semantic loss. The recognition network obtains multi-modal features from a two-stream CNN: global features are taken from a higher layer of the CNN to preserve semantic content, while local layout features are learned by a graph attention network, which better captures key object regions and models their relationships. Extensive experimental results demonstrate that MRNet achieves superior performance to state-of-the-art methods, especially for recognition based on a single modality.
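The local-layout branch described above aggregates region features with a graph attention network. As a rough illustration only (not the authors' implementation, whose details are not given in the abstract), a single-head graph attention aggregation in the style of Veličković et al., applied to region descriptors, might look like:

```python
import numpy as np

def graph_attention(node_feats, W, a, adj):
    """Single-head graph attention over region features.

    node_feats: (N, F) region descriptors (e.g. pooled CNN features)
    W:          (F, F') learnable projection
    a:          (2*F',) learnable attention vector
    adj:        (N, N) binary adjacency between regions
    Returns (N, F') attention-aggregated layout features.
    """
    h = node_feats @ W                       # project features, (N, F')
    N = h.shape[0]
    # pairwise attention logits e_ij = LeakyReLU(a^T [h_i || h_j])
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            e[i, j] = np.concatenate([h[i], h[j]]) @ a
    e = np.where(e > 0, e, 0.2 * e)          # LeakyReLU, slope 0.2
    e = np.where(adj > 0, e, -1e9)           # mask non-neighbouring regions
    # softmax over each node's neighbourhood
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ h                         # weighted aggregation
```

The attention weights `alpha` let each region attend most strongly to the neighbouring regions that are informative for the scene layout, which is the role the abstract attributes to the graph attention network.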
Keywords
RGB-D scene recognition,modal-specific features,cross-modal matching,graph attention network