LocalMamba: Visual State Space Model with Windowed Selective Scan
arXiv (2024)

Abstract
Recent advancements in state space models, notably Mamba, have demonstrated
significant progress in modeling long sequences for tasks like language
understanding. Yet, their application in vision tasks has not markedly
surpassed the performance of traditional Convolutional Neural Networks (CNNs)
and Vision Transformers (ViTs). This paper posits that the key to enhancing
Vision Mamba (ViM) lies in optimizing scan directions for sequence modeling.
Traditional ViM approaches flatten spatial tokens into a 1D sequence, which
breaks local 2D dependencies and increases the distance between spatially
adjacent tokens. We introduce a novel local scanning strategy that divides
images into distinct windows, effectively capturing local dependencies while
maintaining a global perspective. Additionally, acknowledging the varying
preferences for scan patterns across different network layers, we propose a
dynamic method to independently search for the optimal scan choices for each
layer, substantially improving performance. Extensive experiments across both
plain and hierarchical models underscore our approach's superiority in
effectively capturing image representations. For example, our model
significantly outperforms Vim-Ti by 3.1% on ImageNet classification.
Code is available at: https://github.com/hunto/LocalMamba.
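As a rough illustration (not the authors' implementation), the windowed scan can be viewed as a reordering of the flattened token sequence: instead of visiting tokens in plain row-major order, tokens inside each window are visited contiguously, so spatially adjacent tokens stay close together in the 1D sequence. The function below is a minimal sketch under that assumption; the window size and helper name are hypothetical.

```python
import numpy as np

def local_scan_order(h, w, win):
    """Return flattened token indices for a window-by-window scan.

    Tokens inside each win x win window are visited contiguously
    (row-major within the window), so spatially adjacent tokens
    remain close in the resulting 1D sequence.
    """
    idx = np.arange(h * w).reshape(h, w)
    order = []
    for wi in range(0, h, win):
        for wj in range(0, w, win):
            order.extend(idx[wi:wi + win, wj:wj + win].flatten().tolist())
    return order

# 4x4 token grid with 2x2 windows: vertically adjacent tokens 0 and 4
# are neighbors in the local scan, but 4 positions apart in a plain
# row-major flatten.
print(local_scan_order(4, 4, 2)[:4])  # first window: [0, 1, 4, 5]
```

In this toy setting, a plain row-major flatten places token 4 four positions after token 0, while the windowed order places them side by side, which mirrors the locality argument the abstract makes.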