ContourDiff: Unpaired Image Translation with Contour-Guided Diffusion Models
arXiv (2024)
Abstract
Accurately translating medical images across different modalities (e.g., CT
to MRI) has numerous downstream clinical and machine learning applications.
While several methods have been proposed to achieve this, they often prioritize
perceptual quality with respect to output domain features over preserving
anatomical fidelity. However, maintaining anatomy during translation is
essential for many tasks, e.g., when leveraging masks from the input domain to
develop a segmentation model with images translated to the output domain. To
address these challenges, we propose ContourDiff, a novel framework that
leverages domain-invariant anatomical contour representations of images. These
representations are simple to extract from images, yet form precise spatial
constraints on their anatomical content. We introduce a diffusion model that
converts contour representations of images from arbitrary input domains into
images in the output domain of interest. By applying the contour as a
constraint at every diffusion sampling step, we ensure the preservation of
anatomical content. We evaluate our method by training a segmentation model on
images translated from CT to MRI with their original CT masks and testing its
performance on real MRIs. Our method outperforms other unpaired image
translation methods by a significant margin, without requiring access to any
input domain information during training.
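The abstract does not specify the contour extractor or how the contour enters the sampler, so the following is only a rough sketch of the described pipeline: it assumes Canny edges as one plausible domain-invariant contour representation, a standard DDPM-style reverse process, and channel-wise concatenation of the contour into a hypothetical noise-prediction network `denoiser`. None of these choices are confirmed by the abstract.

```python
# Minimal sketch of contour-guided diffusion sampling. Assumptions
# (not stated in the abstract): Canny edges as the contour map, a
# DDPM sampler with a linear beta schedule, and conditioning by
# channel-wise concatenation. `denoiser` is a hypothetical
# noise-prediction network taking (x_t ++ contour, t).
import cv2
import numpy as np
import torch

def extract_contour(image_uint8: np.ndarray) -> torch.Tensor:
    """Extract a binary contour map with Canny edges (one plausible
    domain-invariant anatomical representation)."""
    edges = cv2.Canny(image_uint8, 100, 200)        # (H, W), values in {0, 255}
    return torch.from_numpy(edges / 255.0).float()[None, None]  # (1, 1, H, W)

@torch.no_grad()
def sample_with_contour(denoiser, contour, T=1000, shape=(1, 1, 256, 256)):
    """Reverse diffusion in which the contour conditions every step,
    constraining the anatomical content of the output-domain image."""
    betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                          # start from pure noise
    for t in reversed(range(T)):
        # The contour is re-injected at every sampling step, so the
        # anatomy cannot drift away from the input image's structure.
        eps = denoiser(torch.cat([x, contour], dim=1), t)
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise     # sigma_t = sqrt(beta_t)
    return x                                        # translated image
```

Because the contour conditions every denoising step rather than only the initial one, the sampler can be trained purely on output-domain images yet still respect the spatial layout of an arbitrary input-domain image at test time, which is consistent with the abstract's claim that no input-domain data is needed during training.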