Center-bridged Interaction Fusion for hyperspectral and LiDAR classification

Neurocomputing (2024)

Abstract
Recent classification work in Earth Observation (EO) commonly combines Hyperspectral Image (HSI) and Light Detection and Ranging (LiDAR) signals. However, many current methods fail to consider HSI-LiDAR information concurrently, in terms of both its intra- and inter-modality aspects, and are generally limited in their ability to fuse the features extracted from different modalities. Hence, this paper proposes a center-bridged framework, called Interaction Fusion (IF), that leverages diverse intra- and inter-modality information at the same time. More specifically, intra- and inter-modality information is enriched by introducing the center patch of the HSI (cp-HSI) as an extra input, which provides additional contextual information within and across modalities that can be leveraged for deeper insights. Further, we propose a fusion matrix, a structural feature map designed to integrate nine views generated by a view generator, enabling the adaptive combination of intra- and inter-modality information. Overall, our approach captures latent patterns while mitigating bias resulting from incomplete information. Extensive experiments conducted on three widely recognized datasets (Trento, MUUFL, and Houston) demonstrate that the IF framework achieves state-of-the-art results, surpassing existing methods.
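
The abstract's description of a view generator and a fusion matrix suggests a simple pairing scheme: three token streams (the HSI patch, the cp-HSI center patch, and the LiDAR patch) attend to one another in every query/key-value combination, yielding a 3 x 3 grid of nine views that are then adaptively combined. The sketch below illustrates that reading with standard cross-attention blocks; all module names, dimensions, class counts, and the attention-based weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a center-bridged "nine-view" interaction-fusion head,
# assuming transformer-style token inputs for HSI, cp-HSI, and LiDAR.
# Names (ViewGenerator, FusionMatrix) and hyperparameters are assumptions.
import torch
import torch.nn as nn


class ViewGenerator(nn.Module):
    """Produces nine views by attending each of the three streams
    (HSI, cp-HSI, LiDAR) to every stream: 3 query x 3 key/value pairs."""

    def __init__(self, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
             for _ in range(9)]
        )

    def forward(self, hsi, cp_hsi, lidar):
        streams = [hsi, cp_hsi, lidar]
        views, k = [], 0
        for q in streams:          # query stream
            for kv in streams:     # key/value stream
                out, _ = self.attn[k](q, kv, kv)
                views.append(out.mean(dim=1))   # pool tokens -> (B, D)
                k += 1
        return torch.stack(views, dim=1)        # (B, 9, D)


class FusionMatrix(nn.Module):
    """Adaptively weights the nine views and classifies the fused feature."""

    def __init__(self, embed_dim: int = 64, num_classes: int = 11):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, views):                   # views: (B, 9, D)
        w = torch.softmax(self.score(views), dim=1)   # per-view weights
        fused = (w * views).sum(dim=1)                # (B, D)
        return self.head(fused)                       # (B, num_classes)


if __name__ == "__main__":
    B, N, D = 2, 49, 64                         # batch, tokens per patch, dim
    hsi, cp_hsi, lidar = (torch.randn(B, N, D) for _ in range(3))
    logits = FusionMatrix(D)(ViewGenerator(D)(hsi, cp_hsi, lidar))
    print(logits.shape)                         # torch.Size([2, 11])
```

The softmax weighting stands in for whatever adaptive combination the paper's fusion matrix actually uses; the point is only that the nine intra- and inter-modality views are built explicitly and then merged by learned weights rather than simple concatenation.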
Key words
Hyperspectral image classification, Light detection and ranging, Multi-sensor, Transformer, Cross-modal attention