Trimodal Fusion Network Combined Global-Local Feature Extraction Strategy and Spatial-Frequency Fusion Strategy.

Danyang Yao, Jinyu Wen, Amei Chen, Meie Fang, Xinhua Wei, Zhigeng Pan

ML4CS (3) (2022)

Abstract
Two or more images are fused into one so that their information complements each other, which helps doctors identify disease types, conditions, and lesions. Most existing methods focus on fusing two modalities, yet in practice many diseases require the fusion of three modalities to assist diagnosis. This paper proposes a high-precision trimodal medical image fusion network. Based on the information characteristics of anatomical and functional medical images, we design a global texture module (GTM) and a local detail module (LDM) that extract features simultaneously. The fusion strategy combines the advantages of the spatial and frequency domains to retain more complete texture detail and global contour information. In addition, a multi-attention mechanism is adopted to extract more effective deep features and more accurate location information. Experimental results show that the proposed method is effective in both subjective visual and objective quantitative evaluation.
Keywords
fusion,feature extraction,global-local,spatial-frequency