Perceptually Improved T1-T2 MRI Translations Using Conditional Generative Adversarial Networks

MEDICAL IMAGING 2022: IMAGE PROCESSING (2022)

Abstract
Magnetic Resonance Imaging (MRI) encompasses a set of powerful imaging techniques for understanding brain structure and diagnosing pathology. Different MRI sequences, including T1- and T2-weighted scans, provide rich, complementary information. However, significant equipment costs and acquisition times have inhibited uptake of this critical technology, adversely impacting health equity globally. To reduce the costs associated with brain MRI, we present pTransGAN, a generative adversarial network (GAN) capable of translating both healthy and unhealthy T1 scans into T2 scans, potentially obviating T2 acquisition. Extending prior GAN-based image translation work, we show that adding non-adversarial losses, such as style and content losses, improves the translations, yielding sharper generated images and a more robust model. Whereas previous studies trained separate models for healthy and unhealthy brain MRI, we also present a novel simultaneous training protocol that allows pTransGAN to train concurrently on healthy and unhealthy data sampled from two open brain MRI datasets. As measured by novel metrics that closely match the perceptual similarity judgments of human observers, our simultaneously trained pTransGAN model outperforms models trained individually on only healthy or only unhealthy data. These encouraging results should be further validated on independent paired and unpaired clinical datasets.
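The abstract describes augmenting the adversarial objective with content and style losses. The sketch below illustrates one common way such a combined generator loss can be assembled in PyTorch using VGG-19 features; the layer choices, loss weights, and single-channel handling are illustrative assumptions, not the paper's exact pTransGAN configuration.

```python
# Minimal sketch: generator objective combining adversarial, content, and style terms.
# Layer indices, loss weights, and channel handling are assumptions for illustration.
import torch
import torch.nn as nn
import torchvision.models as models

class VGGFeatures(nn.Module):
    """Extract intermediate VGG-19 feature maps for content/style losses."""
    def __init__(self, layers=(3, 8, 17)):  # relu1_2, relu2_2, relu3_4 (assumed choice)
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layers = set(layers)

    def forward(self, x):
        # VGG expects 3-channel input; replicate the single MRI channel.
        # (ImageNet normalization is omitted here for brevity.)
        if x.shape[1] == 1:
            x = x.repeat(1, 3, 1, 1)
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layers:
                feats.append(x)
        return feats

def gram_matrix(f):
    # Channel-by-channel feature correlations used by the style term.
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def generator_loss(disc_out_fake, fake_t2, real_t2, vgg, lambdas=(1.0, 10.0, 1.0)):
    """Adversarial + content (feature L1) + style (Gram L1) losses; weights are assumed."""
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc_out_fake, torch.ones_like(disc_out_fake))
    f_fake, f_real = vgg(fake_t2), vgg(real_t2)
    content = sum(nn.functional.l1_loss(a, b) for a, b in zip(f_fake, f_real))
    style = sum(nn.functional.l1_loss(gram_matrix(a), gram_matrix(b))
                for a, b in zip(f_fake, f_real))
    la, lc, ls = lambdas
    return la * adv + lc * content + ls * style
```

In this kind of setup the content term encourages structural agreement with the target T2 scan while the style term matches feature statistics, which is one plausible route to the sharper, more perceptually faithful translations the abstract reports.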
Keywords
Generative/adversarial learning, Image synthesis, Deep learning