Self-Supervised Pre-Training for Deep Image Prior-Based Robust PET Image Denoising

IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES (2024)

Abstract
Deep image prior (DIP) has been successfully applied to positron emission tomography (PET) image restoration: it represents an implicit prior using only a convolutional neural network architecture, with no training dataset, whereas the typical supervised approach requires massive numbers of low- and high-quality PET image pairs. To meet the growing need for DIP-based PET imaging, it is essential to improve the performance of the underlying DIP itself. Here, we propose a self-supervised pre-training model that improves DIP-based PET image denoising. The proposed pre-training model acquires transferable and generalizable visual representations from unlabeled PET images alone by restoring variously degraded PET images in a self-supervised manner. We evaluated the proposed method on clinical brain PET data with various radioactive tracers (F-18-florbetapir, C-11-Pittsburgh compound-B, F-18-fluoro-2-deoxy-D-glucose, and O-15-CO2) acquired on different PET scanners. The proposed method with the self-supervised pre-training model achieved robust, state-of-the-art denoising performance while retaining spatial detail and quantification accuracy, compared with other unsupervised methods and pre-training models. These results highlight the method's potential to be particularly effective for rare diseases and probes, and to help reduce the scan time or radiotracer dose without burdening patients.
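The core DIP mechanism the abstract builds on can be illustrated in miniature: an untrained, over-parameterized network with a fixed random input is fit to the noisy target alone, and the network's inductive bias plus early stopping serves as the implicit prior. The sketch below is a hypothetical, numpy-only toy (a tiny MLP on a 1-D signal); the paper itself uses a convolutional architecture on PET images, and the width, learning rate, and iteration count here are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": a smooth signal corrupted by noise (stand-in for a noisy PET slice).
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n)
clean = np.sin(x)
noisy = clean + 0.3 * rng.standard_normal(n)

# DIP in miniature: fit an untrained network, with a FIXED random input z,
# to the noisy target only.  No clean training data is ever used.
h = 32                         # hidden width (illustrative choice)
z = rng.standard_normal(n)     # fixed random input, as in DIP
W1 = 0.1 * rng.standard_normal((h, n)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((n, h)); b2 = np.zeros(n)

lr = 0.05
losses = []
for step in range(1000):
    # forward pass: out = W2 @ tanh(W1 @ z + b1) + b2
    a = np.tanh(W1 @ z + b1)
    out = W2 @ a + b2
    r = out - noisy                       # residual to the noisy target
    losses.append(float(np.mean(r ** 2)))
    # backward pass (manual gradients of the MSE loss)
    g_out = 2.0 * r / n
    gW2 = np.outer(g_out, a); gb2 = g_out
    g_a = W2.T @ g_out
    g_pre = g_a * (1.0 - a ** 2)          # derivative of tanh
    gW1 = np.outer(g_pre, z); gb1 = g_pre
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"fit loss to noisy target: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In full DIP the optimization is stopped early, before the network also memorizes the noise; the paper's contribution is to warm-start this process from a self-supervised pre-trained network rather than a random initialization.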
Keywords
Deep image prior (DIP), positron emission tomography (PET) image denoising, pretraining, self-supervised learning, unsupervised learning