MMR-Mamba: Multi-Modal MRI Reconstruction with Mamba and Spatial-Frequency Information Fusion
arXiv (2024)
Abstract
Multi-modal MRI offers valuable complementary information for diagnosis and
treatment; however, its utility is limited by prolonged scanning times. To
accelerate the acquisition process, a practical approach is to reconstruct
images of the target modality, which requires longer scanning times, from
under-sampled k-space data using the fully-sampled reference modality with
shorter scanning times as guidance. The primary challenge of this task is
comprehensively and efficiently integrating complementary information from
different modalities to achieve high-quality reconstruction. Existing methods
fall short in two ways: 1) convolution-based models fail to capture long-range
dependencies; 2) Transformer-based models, while excelling at global feature
modeling, incur quadratic computational complexity. To address this, we
propose MMR-Mamba, a novel framework that thoroughly and efficiently integrates
multi-modal features for MRI reconstruction, leveraging Mamba's capability to
capture long-range dependencies with linear computational complexity while
exploiting global properties of the Fourier domain. Specifically, we first
design a Target modality-guided Cross Mamba (TCM) module in the spatial domain,
which maximally restores the target modality information by selectively
incorporating relevant information from the reference modality. Then, we
introduce a Selective Frequency Fusion (SFF) module to efficiently integrate
global information in the Fourier domain and recover high-frequency signals for
the reconstruction of structural details. Furthermore, we devise an Adaptive
Spatial-Frequency Fusion (ASFF) module, which mutually enhances the spatial and
frequency domains by supplementing less informative channels from one domain
with corresponding channels from the other.