
AW-Net: A Novel Fully Connected Attention-based Medical Image Segmentation Model

2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW(2023)

Abstract
Multimodal medical imaging poses a unique challenge to the data scientist, since the data are not only voluminous but also extremely heterogeneous. In this paper, we propose a novel fully connected AW-Net that addresses the problem of segmenting multimodal 3D/4D medical images by incorporating a novel regularized transient block. The AW-Net stacks consecutive 2D image slices to extract spatial information for segmentation. Furthermore, dropout layers are incorporated to reduce the computational cost without affecting the accuracy of the predicted masks. The AW-Net has been tested on benchmark datasets: BraTS2020 for brain MRI, the RSNA 2022 cervical spine dataset for spine CT, and the DUKE and QIN datasets for breast MRI and PET, respectively. The AW-Net achieves a Dice similarity coefficient (DSC) of 81.3% and 80.5% for breast cancer segmentation from DCE and T1 images, 89.6% averaged over the three segmented tumor classes for brain tumor segmentation on the BraTS2020 dataset, 93.7% for breast tumor segmentation from breast PET images, and 71.9% for cervical fracture localization on the RSNA 2022 challenge. These evaluation experiments on public datasets indicate that the proposed AW-Net is a generalized, reproducible, efficient, and highly accurate model capable of segmenting and localizing anomalies in multimodal 3D/4D medical imaging data, from both small and large datasets. The code is available at: https://github.com/Dynamo13/AW-Net.
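The abstract describes stacking consecutive 2D slices of a 3D volume so that a 2D network can see local through-plane context. A minimal sketch of that slice-stacking idea is shown below; the function name, the window size `n`, and the edge-replication at volume boundaries are illustrative assumptions, not the authors' implementation (see the GitHub repository for the actual code).

```python
import numpy as np

def stack_slices(volume, n=3):
    """Stack each axial slice with its neighbours into a multi-channel
    2D input of shape (n, H, W), replicating edge slices at the volume
    boundaries. Sketch only; the paper's repository has the real code."""
    depth = volume.shape[0]
    half = n // 2
    stacks = []
    for i in range(depth):
        # Clamp neighbour indices so the first/last slices repeat at the edges.
        idx = [min(max(i + k, 0), depth - 1) for k in range(-half, half + 1)]
        stacks.append(np.stack([volume[j] for j in idx], axis=0))
    return np.stack(stacks, axis=0)  # (depth, n, H, W)

# Example: an 8-slice 64x64 volume becomes 8 three-channel 2D inputs.
vol = np.random.rand(8, 64, 64).astype(np.float32)
x = stack_slices(vol, n=3)
print(x.shape)  # (8, 3, 64, 64)
```

Each stacked sample can then be fed to a 2D segmentation backbone as an `n`-channel image, giving the network some 3D context at 2D cost.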
Keywords
Deep Learning,Medical Image,Segmentation,Attention,Multimodal