PointMamba: A Simple State Space Model for Point Cloud Analysis
CoRR (2024)
Abstract
Transformers have become one of the foundational architectures in point cloud
analysis tasks due to their excellent global modeling ability. However, the
attention mechanism has quadratic complexity in the number of tokens, making a
linear-complexity method with global modeling highly appealing. In this paper,
we propose PointMamba, transferring the success of Mamba, a recent
representative state space model (SSM), from NLP to point cloud analysis
tasks. Unlike traditional Transformers, PointMamba employs a linear-complexity
algorithm, providing global modeling capacity while significantly reducing
computational cost.
Specifically, our method leverages space-filling curves for effective point
tokenization and adopts an extremely simple, non-hierarchical Mamba encoder as
the backbone. Comprehensive evaluations demonstrate that PointMamba achieves
superior performance across multiple datasets while significantly reducing GPU
memory usage and FLOPs. This work underscores the potential of SSMs in 3D
vision-related tasks and presents a simple yet effective Mamba-based baseline
for future research. The code is available at
https://github.com/LMD0311/PointMamba.
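
The tokenization step orders an unstructured point set along a space-filling curve, so that tokens adjacent in the resulting 1D sequence tend to be neighbors in 3D space. Below is a minimal sketch of this idea using Morton (Z-order) encoding as a stand-in space-filling curve; the grid resolution and function names are illustrative and do not reflect the authors' implementation.

```python
import numpy as np

def morton_code(ix: int, iy: int, iz: int, bits: int = 10) -> int:
    """Interleave the bits of three quantized coordinates into one Z-order key."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (3 * b)
        code |= ((iy >> b) & 1) << (3 * b + 1)
        code |= ((iz >> b) & 1) << (3 * b + 2)
    return int(code)

def serialize_points(points: np.ndarray, bits: int = 10) -> np.ndarray:
    """Order an (N, 3) point cloud along a Z-order space-filling curve.

    Returns the permutation that sorts points by curve position, so nearby
    indices in the output tend to be nearby in 3D space.
    """
    # Normalize to the unit cube, then quantize onto a 2^bits grid.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    grid = ((points - mins) / (maxs - mins + 1e-9) * (2**bits - 1)).astype(np.int64)
    keys = np.array([morton_code(x, y, z, bits) for x, y, z in grid])
    return np.argsort(keys)

if __name__ == "__main__":
    pts = np.random.rand(1024, 3).astype(np.float32)
    serialized = pts[serialize_points(pts)]  # 1D token sequence for the encoder
    print(serialized.shape)  # (1024, 3)
```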
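The encoder's linear complexity comes from the SSM's recurrent form: each token updates a fixed-size hidden state exactly once, so a sequence of length L costs O(L) time, versus attention's O(L^2). Below is a minimal NumPy sketch of a diagonal SSM scan of this kind; it omits Mamba's input-dependent (selective) parameters, and all names here are illustrative rather than the paper's API.

```python
import numpy as np

def ssm_scan(x: np.ndarray, A: np.ndarray, B: np.ndarray,
             C: np.ndarray, dt: float = 0.1) -> np.ndarray:
    """Linear-time scan of a diagonal state space model.

    x: (L,) input sequence; A: (N,) diagonal state matrix (negative entries
    for stability); B, C: (N,) input/output projections. A single pass over
    the sequence gives O(L) time and O(N) memory.
    """
    # Zero-order-hold discretization of the continuous-time parameters.
    A_bar = np.exp(dt * A)            # (N,)
    B_bar = (A_bar - 1.0) / A * B     # (N,)
    h = np.zeros_like(A)
    y = np.empty_like(x)
    for t in range(len(x)):
        h = A_bar * h + B_bar * x[t]  # state update: constant cost per token
        y[t] = C @ h                  # readout
    return y

if __name__ == "__main__":
    L, N = 2048, 16
    rng = np.random.default_rng(0)
    A = -np.abs(rng.standard_normal(N))  # stable diagonal dynamics
    B, C = rng.standard_normal(N), rng.standard_normal(N)
    y = ssm_scan(rng.standard_normal(L), A, B, C)
    print(y.shape)  # (2048,)
```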