Mapping medical image-text to a joint space via masked modeling

Zhihong Chen, Yuhao Du, Jinpeng Hu, Yang Liu, Guanbin Li, Xiang Wan, Tsung-Hui Chang

Medical Image Analysis (2024)

Abstract
Recently, masked autoencoders have demonstrated their feasibility in extracting effective image and text features (e.g., BERT for natural language processing (NLP) and MAE in computer vision (CV)). This study investigates the potential of applying these techniques to vision-and-language representation learning in the medical domain. To this end, we introduce a self-supervised learning paradigm, multi-modal masked autoencoders (M3AE), which learns to map medical images and texts to a joint space by reconstructing pixels and tokens from randomly masked images and texts. Specifically, we design this approach from three aspects: First, taking into account the varying information densities of vision and language, we employ distinct masking ratios for input images and text, with a notably higher masking ratio for images; Second, we utilize visual and textual features from different layers for reconstruction to address varying levels of abstraction in vision and language; Third, we develop different designs for the vision and language decoders. We establish a medical vision-and-language benchmark to conduct an extensive evaluation. Our experimental results demonstrate the effectiveness of the proposed method, which achieves state-of-the-art results on all downstream tasks. Further analyses validate the effectiveness of the various components and discuss the limitations of the proposed approach. The source code is available at https://github.com/zhjohnchan/M3AE.
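To make the first design point concrete, here is a minimal sketch (not the authors' implementation; see the linked repository for the actual code) of random masking applied with asymmetric ratios to image patches and text tokens. The function name, embedding shapes, and the 75%/15% ratios are illustrative assumptions, chosen to reflect the "notably higher masking ratio for images" described in the abstract.

```python
import torch

def random_mask(tokens: torch.Tensor, mask_ratio: float):
    """Randomly drop a fraction of tokens along the sequence dimension.

    tokens: (batch, seq_len, dim) embeddings (image patches or text tokens).
    Returns the kept tokens and a boolean mask marking dropped positions.
    """
    b, n, d = tokens.shape
    n_keep = int(n * (1.0 - mask_ratio))
    # Per-sample random permutation; keep the first n_keep positions.
    noise = torch.rand(b, n, device=tokens.device)
    ids_keep = noise.argsort(dim=1)[:, :n_keep]
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n, dtype=torch.bool, device=tokens.device)
    mask.scatter_(1, ids_keep, False)  # False = kept, True = masked
    return kept, mask

# Illustrative asymmetric ratios: mask most image patches, few text tokens.
image_patches = torch.randn(2, 196, 768)  # e.g., ViT patch embeddings
text_tokens = torch.randn(2, 32, 768)     # e.g., BERT token embeddings
img_kept, img_mask = random_mask(image_patches, mask_ratio=0.75)
txt_kept, txt_mask = random_mask(text_tokens, mask_ratio=0.15)
```

The asymmetry follows the abstract's reasoning: images are spatially redundant, so heavy masking still leaves enough signal for reconstruction, whereas text is information-dense and tolerates only light masking.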
Keywords
Multi-modal pre-training, Masked autoencoders, Medical vision-and-language analysis