Adversarial Keyword Extraction and Semantic-Spatial Feature Aggregation for Clinical Report Guided Thyroid Nodule Segmentation

Pattern Recognition and Computer Vision (PRCV 2023), Part XIII (2024)

Abstract
Existing thyroid nodule segmentation methods are primarily developed on ultrasound images alone and generally neglect the clinical reports, which contain rich semantic information about the nodules. However, current text-guided segmentation methods for natural images are not applicable to image-report thyroid nodule data, due to the many-to-one correspondence between images and reports. To this end, we propose a clinical report guided thyroid nodule segmentation framework with an Adversarial Keyword Extraction (AKE) module to extract keywords from reports and a Semantic-Spatial Feature Aggregation (SSFA) module to integrate the reports into the segmentation model. To alleviate the many-to-one correspondence issue, the AKE module highlights the keywords relevant to the current ultrasound image via a keyword mask, adopting adversarial learning to encourage the mask generator to select the descriptions that are useful for boosting segmentation performance. The SSFA module then effectively and efficiently maps semantic information from the reports to each pixel of the spatial features, so as to emphasize the target regions. Moreover, we manually collect a clinical Reports Assisted Thyroid Nodule segmentation (RATN) dataset, which includes ultrasound images, pixel-wise nodule segmentation annotations, and the corresponding clinical reports. Extensive experiments on the RATN dataset demonstrate the effectiveness and computational efficiency of the proposed method over existing methods. Code and data are available at https://github.com/cvi-szu.
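For intuition only, below is a minimal, hypothetical sketch of the semantic-to-spatial fusion idea described in the abstract, not the authors' SSFA implementation: a pooled report embedding is projected into the image feature space, broadcast over the spatial grid, and fused with the image features through a 1x1 convolution so that every pixel is modulated by the report semantics. The module name, feature dimensions, and fusion choice are illustrative assumptions.

```python
# Hypothetical sketch of semantic-spatial fusion (not the paper's SSFA module).
import torch
import torch.nn as nn

class SemanticSpatialFusion(nn.Module):
    def __init__(self, img_channels: int, text_dim: int):
        super().__init__()
        # Project the pooled report embedding into the image feature space.
        self.text_proj = nn.Linear(text_dim, img_channels)
        # Fuse concatenated image/text features back to img_channels with a 1x1 conv.
        self.fuse = nn.Conv2d(2 * img_channels, img_channels, kernel_size=1)

    def forward(self, img_feat: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, C, H, W) spatial features; text_emb: (B, D) report embedding.
        b, c, h, w = img_feat.shape
        t = self.text_proj(text_emb)                 # (B, C)
        t = t.view(b, c, 1, 1).expand(-1, -1, h, w)  # broadcast over the spatial grid
        fused = self.fuse(torch.cat([img_feat, t], dim=1))
        return torch.relu(fused)

if __name__ == "__main__":
    m = SemanticSpatialFusion(img_channels=64, text_dim=256)
    out = m(torch.randn(2, 64, 32, 32), torch.randn(2, 256))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```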
Keywords
Thyroid Nodule Segmentation, Clinical Report, Adversarial Keyword Extraction, Feature Aggregation, Ultrasound Image