Joint Semantic Segmentation using representations of LiDAR point clouds and camera images

Information Fusion (2024)

Abstract
LiDAR and cameras are two common vision sensors in real-world applications, producing complementary point cloud and image data. While multimodal data have previously been used mostly for 3D detection and tracking, we aim to study large-scale semantic segmentation through multimodal data fusion rather than only knowledge transfer or distillation. We show that fusing LiDAR features with camera features and abandoning the strict point-to-pixel hard correspondence can lead to better performance. Even so, it remains difficult to make full use of multimodal data due to the spatiotemporal misalignment of sensors and uneven data distribution. To address this issue, we propose Joint Semantic Segmentation (JoSS), a powerful LiDAR-camera fusion solution that employs the attention mechanism to explore the potential relationships between point clouds and images. Specifically, JoSS consists of commonly used 3D and 2D backbones and lightweight transformer decoders operating on point clouds and images. The point cloud decoder adopts queries to extract semantics from LiDAR features, and the image decoder adaptively fuses these queries with the corresponding image features. Both exploit contextual information, thus fully mining multimodal information for semantic segmentation. In addition, we propose an effective unimodal data augmentation (UDA) method that performs cross-modal contrastive learning on point clouds and images, significantly improving accuracy by augmenting the point cloud alone without the complexity of generating paired samples of both modalities. Our JoSS achieves advanced results on two widely used large-scale benchmarks, i.e., SemanticKITTI and nuScenes-lidarseg.
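The query-based fusion and the cross-modal contrastive objective described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the module names, feature dimensions, and loss form are assumptions made for the example.

```python
# Hypothetical sketch (assumed names and shapes, not the paper's code):
# point-cloud queries attend over flattened image features, so fusion does
# not rely on a strict point-to-pixel correspondence, plus an InfoNCE-style
# cross-modal contrastive loss between matched point/image embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageDecoderBlock(nn.Module):
    """One lightweight transformer decoder layer: LiDAR queries x image features."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, queries, img_feats):
        # queries:   (B, Nq, C) semantic queries refined by the point cloud decoder
        # img_feats: (B, H*W, C) flattened features from the 2D backbone
        attn_out, _ = self.cross_attn(queries, img_feats, img_feats)
        queries = self.norm1(queries + attn_out)
        queries = self.norm2(queries + self.ffn(queries))
        return queries

def cross_modal_contrastive_loss(point_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss pulling matched point/image embeddings together."""
    p = F.normalize(point_emb, dim=-1)          # (N, C)
    i = F.normalize(image_emb, dim=-1)          # (N, C)
    logits = p @ i.t() / temperature            # (N, N) similarity matrix
    targets = torch.arange(p.size(0), device=p.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```

Under this reading, the contrastive term lets an augmented point cloud be trained against the original, unaltered image features, which is consistent with the abstract's claim that only the point cloud needs to be augmented.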
Keywords
Joint 3D-2D learning, Contrastive learning, Information fusion, Large-scale semantic segmentation, Point cloud segmentation