Masked Image Modelling for retinal OCT understanding
CoRR (2024)
Abstract
This work explores the effectiveness of masked image modelling for learning
representations of retinal OCT images. To this end, we leverage Masked
Autoencoders (MAE), a simple and scalable method for self-supervised learning,
to obtain a powerful and general representation for OCT images by training on
700K OCT images from 41K patients collected under real-world clinical settings.
We also provide the first extensive evaluation of an OCT model on a
challenging battery of six downstream tasks. Our model achieves strong
performance when fully finetuned but can also serve as a versatile frozen
feature extractor for many tasks using lightweight adapters. Furthermore, we
propose an extension of MAE pretraining that fuses OCT with an auxiliary
modality, namely IR fundus images, and learns a joint model for both. We
demonstrate our approach improves performance on a multimodal downstream
application. Our experiments utilize most publicly available OCT datasets, thus
enabling future comparisons. Our code and model weights are publicly available
at https://github.com/TheoPis/MIM_OCT.
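The core of MAE pretraining, as leveraged here, is to hide a large random fraction of image patches and train an encoder-decoder to reconstruct them. A minimal sketch of the random-masking step is below; the 224×224 input size, 16×16 patch size, and 75% mask ratio are typical MAE defaults assumed for illustration, not details taken from the paper's code.

```python
import numpy as np

def random_mask_patches(patches, mask_ratio=0.75, seed=0):
    """MAE-style random masking: keep a random subset of patches.

    patches: (N, D) array of N flattened image patches.
    Returns (visible_patches, kept_indices, masked_indices); the encoder
    sees only the visible patches, and the decoder is trained to
    reconstruct the masked ones.
    """
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)           # random patch order
    keep_idx = np.sort(perm[:n_keep])   # patches the encoder sees
    mask_idx = np.sort(perm[n_keep:])   # patches to reconstruct
    return patches[keep_idx], keep_idx, mask_idx

# Example: a 224x224 OCT slice split into 16x16 patches -> 14*14 = 196 patches
patches = np.zeros((196, 16 * 16))
visible, keep_idx, mask_idx = random_mask_patches(patches)
print(visible.shape)  # (49, 256): only 25% of patches are encoded
```

The high mask ratio is what makes MAE scalable: the encoder processes only a quarter of the tokens, which is why pretraining on 700K images is tractable.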