ZMFF: Zero-shot multi-focus image fusion

Information Fusion (2023)

Abstract
Multi-focus image fusion (MFF) is an effective way to eliminate the out-of-focus blur introduced during imaging. The difficulty of distinguishing different blur levels and the lack of real supervised data have kept multi-focus image fusion a challenging task after decades of research. According to deep image prior (DIP) (Ulyanov et al., 2018), a neural network itself can capture the low-level statistics of a single image and has been successfully used as a prior for solving many inverse problems without hand-crafted priors or priors learned from large-scale datasets. Motivated by this idea, we propose a novel multi-focus image fusion framework named ZMFF, which comprises a deep image prior network that models the deep prior of the fused image and a deep mask prior network that models the deep prior of the focus map corresponding to each source image. Without labor-intensive collection of training pairs, our method achieves zero-shot learning and avoids the domain-shift problem caused by the inconsistency between manually degraded multi-focus images and real ones. To the best of our knowledge, it is the first unsupervised and untrained deep model for the MFF task. Extensive experiments on both synthetic and real-world datasets demonstrate the promising performance, generalization and flexibility of our approach. Source code is available at https://github.com/junjun-jiang/ZMFF.
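The abstract only outlines the framework, so the following is a minimal, hypothetical PyTorch sketch of the kind of zero-shot, per-input optimization it describes: an untrained deep image prior network produces the fused image, an untrained deep mask prior network produces one focus map per source image, and both are optimized jointly on the given inputs alone, with no external training data. The network architecture, the masked L1 reconstruction loss, and all names (TinyPriorNet, zero_shot_fuse) are illustrative assumptions, not the authors' actual implementation; the real details are in the linked repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch (not the authors' code): both priors are small untrained
# conv nets optimized per input set, DIP-style; the loss below is a
# plausible stand-in for the paper's objective, not its exact formulation.

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1),
        nn.BatchNorm2d(cout),
        nn.LeakyReLU(0.2, inplace=True),
    )

class TinyPriorNet(nn.Module):
    """Stand-in for the deep image / deep mask prior networks (hypothetical)."""
    def __init__(self, cout, width=32):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(32, width),      # input channels match the random code z
            conv_block(width, width),
            nn.Conv2d(width, cout, 3, padding=1),
        )

    def forward(self, z):
        return self.body(z)

def zero_shot_fuse(sources, iters=2000, lr=1e-3):
    """sources: list of source tensors, each of shape (1, 3, H, W) in [0, 1]."""
    n = len(sources)
    _, _, h, w = sources[0].shape
    image_net = TinyPriorNet(cout=3)   # deep image prior for the fused image
    mask_net = TinyPriorNet(cout=n)    # deep mask prior for the n focus maps
    z_img = torch.randn(1, 32, h, w)   # fixed random inputs, as in DIP
    z_mask = torch.randn(1, 32, h, w)
    params = list(image_net.parameters()) + list(mask_net.parameters())
    opt = torch.optim.Adam(params, lr=lr)

    for _ in range(iters):
        opt.zero_grad()
        fused = torch.sigmoid(image_net(z_img))          # (1, 3, H, W)
        masks = torch.softmax(mask_net(z_mask), dim=1)   # (1, n, H, W), sums to 1
        # Assumed reconstruction term: where a source is in focus (high mask
        # value), the fused image should agree with that source.
        loss = sum(F.l1_loss(masks[:, i:i + 1] * fused,
                             masks[:, i:i + 1] * sources[i]) for i in range(n))
        loss.backward()
        opt.step()
    return fused.detach(), masks.detach()

Under these assumptions, a call like zero_shot_fuse([img_near, img_far]) would return a fused image and two focus maps for a single near-focus/far-focus pair, optimized from scratch on that pair alone.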
Keywords
Multi-focus image fusion, Deep image prior, Deep convolutional neural network