LSK3DNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels
CoRR(2024)
Abstract
Autonomous systems need to process large-scale, sparse, and irregular point
clouds with limited compute resources. Consequently, it is essential to develop
LiDAR perception methods that are both efficient and effective. Although
naively enlarging the 3D kernel size can enhance performance, it also incurs
cubically increasing overhead. Therefore, it is crucial to develop
streamlined 3D large kernel designs that eliminate redundant weights and work
effectively with larger kernels. In this paper, we propose an efficient and
effective Large Sparse Kernel 3D Neural Network (LSK3DNet) that leverages
dynamic pruning to amplify the 3D kernel size. Our method comprises two core
components: Spatial-wise Dynamic Sparsity (SDS) and Channel-wise Weight
Selection (CWS). SDS dynamically prunes and regrows volumetric weights from the
beginning to learn a large sparse 3D kernel. It not only boosts performance but
also significantly reduces model size and computational cost. Moreover, CWS
selects the most important channels for 3D convolution during training and
subsequently prunes the redundant channels to accelerate inference for 3D
vision tasks. We demonstrate the effectiveness of LSK3DNet on three benchmark
datasets and five tracks compared with classical models and large kernel
designs. Notably, LSK3DNet achieves state-of-the-art performance on
SemanticKITTI (i.e., 75.6% mIoU), with a roughly 40% reduction in model size
compared to the naive large 3D kernel model.
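The abstract gives no pseudocode, but the two components can be sketched. The snippet below is an illustrative sketch, not the authors' implementation: SDS is modeled as a prune-and-regrow mask update in the spirit of dynamic sparse training (the paper does not specify the regrowth criterion here, so random regrowth is assumed), and CWS is modeled as ranking output channels by L1 norm (the scoring function is likewise an assumption). All names (`sds_step`, `cws_select`, `prune_frac`, `keep`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sds_step(weights, mask, prune_frac=0.1):
    """One Spatial-wise Dynamic Sparsity update (sketch):
    drop the smallest-magnitude active weights and regrow the
    same number at random inactive positions, keeping the
    overall sparsity of the large 3D kernel constant."""
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)
    n = int(prune_frac * active.size)
    if n == 0 or inactive.size < n:
        return mask
    # prune: smallest |w| among active positions
    mags = np.abs(weights.ravel()[active])
    drop = active[np.argsort(mags)[:n]]
    new_mask = mask.ravel().copy()
    new_mask[drop] = 0
    # regrow: random inactive positions (a gradient-based
    # criterion would be another common choice)
    grow = rng.choice(inactive, size=n, replace=False)
    new_mask[grow] = 1
    return new_mask.reshape(mask.shape)

def cws_select(weights, keep=32):
    """Channel-wise Weight Selection (sketch): score each output
    channel by the L1 norm of its weights and keep the top-`keep`,
    so the remaining channels can be pruned at inference time."""
    scores = np.abs(weights).sum(axis=(1, 2, 3, 4))
    return np.argsort(scores)[::-1][:keep]

# toy large 3D conv kernel: (out_ch, in_ch, kD, kH, kW)
w = rng.standard_normal((64, 16, 9, 9, 9))
m = (rng.random(w.shape) < 0.5).astype(np.int8)  # 50% sparse start
m2 = sds_step(w, m, prune_frac=0.1)
kept = cws_select(w, keep=32)
```

Note that `sds_step` leaves the number of active weights unchanged, which is what lets a large kernel stay cheap: only the masked-in weights contribute parameters and compute.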