Modality-Agnostic Structural Image Representation Learning for Deformable Multi-Modality Medical Image Registration
CVPR 2024
Abstract
Establishing dense anatomical correspondence across distinct imaging
modalities is a foundational yet challenging procedure for numerous medical
image analysis studies and image-guided radiotherapy. Existing multi-modality
image registration algorithms rely on statistics-based similarity measures or
local structural image representations. However, the former is sensitive to
locally varying noise, while the latter is not discriminative enough to cope
with complex anatomical structures in multi-modality scans, causing ambiguity
in determining the anatomical correspondence across scans with different
modalities. In this paper, we propose a modality-agnostic structural
representation learning method, which leverages Deep Neighbourhood
Self-similarity (DNS) and anatomy-aware contrastive learning to learn
discriminative and contrast-invariant deep structural image representations
(DSIR) without the need for anatomical delineations or pre-aligned training
images. We evaluate our method on multiphase CT, abdomen MR-CT, and brain MR
T1w-T2w registration. Comprehensive results demonstrate that our method is
superior to conventional local structural representations and
statistics-based similarity measures in terms of discriminability and
accuracy.
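
The abstract does not spell out how the DNS descriptor is computed. As a rough, hypothetical sketch of the general idea only, the PyTorch snippet below computes a MIND-style neighbourhood self-similarity descriptor over a 2D feature map: for each position, it measures feature distances to its spatial neighbours and normalizes them, so the result depends on local structure rather than absolute intensity. The function name, the offset radius, and the softmax normalization are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def neighbourhood_self_similarity(feat: torch.Tensor, radius: int = 1) -> torch.Tensor:
    """Sketch of a neighbourhood self-similarity descriptor (assumed form, not the paper's DNS).

    feat: (B, C, H, W) feature map, e.g. the output of a CNN encoder.
    Returns (B, K, H, W), where K = (2*radius+1)**2 - 1 channels, one per
    non-zero offset in the neighbourhood. Each channel encodes how similar
    the feature at a pixel is to the feature at one neighbouring offset.
    """
    B, C, H, W = feat.shape
    # Replicate-pad so every pixel has a full neighbourhood.
    pad = F.pad(feat, [radius] * 4, mode="replicate")
    sims = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[:, :, radius + dy: radius + dy + H,
                                radius + dx: radius + dx + W]
            # Negative squared feature distance to this neighbour,
            # averaged over channels.
            d2 = ((feat - shifted) ** 2).mean(dim=1, keepdim=True)
            sims.append(-d2)
    sim = torch.cat(sims, dim=1)  # (B, K, H, W)
    # Softmax over the neighbourhood turns distances into a distribution,
    # which is insensitive to global intensity/contrast scaling of the input.
    return torch.softmax(sim, dim=1)
```

Because such a descriptor reflects local geometric structure rather than raw intensities, descriptors computed from two different modalities can, in principle, be compared with a simple mono-modality measure such as SSD inside a registration loss.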