Hybrid Sparse and Dense Attentions of Similar Regions for Image Denoising

Crossref (2024)

Abstract
Dot-product self-attention has achieved great success in computer vision. Its effectiveness stems from its large capacity for capturing long-range dependencies in feature maps. However, its quadratic computational complexity with respect to the image size hinders the further application of self-attention modules. A variety of strategies that restrict the regions over which the dot product is computed have therefore been proposed to reduce the computational cost. By analyzing the advantages and disadvantages of these methods, we introduce a hybrid sparse and dense attention (HSDA) module, which applies dense dot-product attention within the most similar regions and sparse attention in the remaining regions. Numerical experiments on image denoising demonstrate that the designed HSDA module combines the advantages of both sparse and local dense attention and obtains PSNRs similar to full attention at a lower computational cost. The network constructed from HSDA modules produces favorable results compared to many state-of-the-art methods.
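To make the hybrid idea concrete, the following is a minimal sketch of attention over image windows that is dense toward the most similar regions and sparse (strided tokens) toward the rest. It is not the authors' HSDA implementation; the window size, mean-pooled window descriptors, top-k selection, and sparse stride below are illustrative assumptions.

```python
# Illustrative hybrid sparse/dense window attention (assumed design, not the paper's exact HSDA).
import torch
import torch.nn.functional as F


def hybrid_window_attention(x, window=8, top_k=4, sparse_stride=4):
    """x: (B, C, H, W) feature map, H and W divisible by `window`.
    Each window attends densely to its top_k most similar windows
    (including itself) and sparsely to strided tokens of the others."""
    B, C, H, W = x.shape
    # Partition into non-overlapping windows: (B, C, nH, nW, w, w).
    wins = x.unfold(2, window, window).unfold(3, window, window)
    nH, nW = wins.shape[2], wins.shape[3]
    N = nH * nW
    wins = wins.permute(0, 2, 3, 4, 5, 1).reshape(B, N, window * window, C)

    # Window-level descriptors (mean pooling) used to rank region similarity.
    desc = F.normalize(wins.mean(dim=2), dim=-1)          # (B, N, C)
    sim = desc @ desc.transpose(1, 2)                      # (B, N, N)
    top_idx = sim.topk(top_k, dim=-1).indices              # (B, N, top_k)

    out = torch.empty_like(wins)
    for b in range(B):
        for n in range(N):
            q = wins[b, n]                                  # (w*w, C) query tokens
            # Dense keys/values: all tokens of the top_k most similar windows.
            dense_kv = wins[b, top_idx[b, n]].reshape(-1, C)
            # Sparse keys/values: every `sparse_stride`-th token of the other windows.
            rest = torch.ones(N, dtype=torch.bool)
            rest[top_idx[b, n]] = False
            sparse_kv = wins[b, rest][:, ::sparse_stride].reshape(-1, C)
            kv = torch.cat([dense_kv, sparse_kv], dim=0)
            attn = F.softmax(q @ kv.transpose(0, 1) / C ** 0.5, dim=-1)
            out[b, n] = attn @ kv

    # Fold windows back to the (B, C, H, W) layout.
    out = out.reshape(B, nH, nW, window, window, C).permute(0, 5, 1, 3, 2, 4)
    return out.reshape(B, C, H, W)


if __name__ == "__main__":
    feat = torch.randn(1, 16, 32, 32)
    print(hybrid_window_attention(feat).shape)  # torch.Size([1, 16, 32, 32])
```

Compared with full attention over all H*W tokens, each query window here compares against top_k * window^2 dense tokens plus a strided subset of the rest, which is the source of the computational savings the abstract describes.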