Dynamic feature distillation and pyramid split large kernel attention network for lightweight image super-resolution

Bingzan Liu, Xin Ning, Shichao Ma, Yizhen Yang

Multimedia Tools and Applications (2024)

Abstract
With the development of edge intelligent devices such as unmanned aerial vehicles (UAVs), the demand for high-resolution (HR) images has increased significantly. However, noise and blurring caused by finite detector sizes and optics make HR images difficult to acquire directly, so lightweight super-resolution (SR) of optical images based on convolutional neural networks (CNNs) has become a research hotspot. Most state-of-the-art methods, however, focus on local features along a single dimension, and although composite attention mechanisms have been employed, conflicts among features from different attention types degrade SR performance. In this paper, we propose a dynamic feature distillation and pyramid split large kernel attention network (DPLKA) to address these problems. In particular, a pyramid split large kernel attention module (PSLKA) is introduced to capture multi-scale global information and long-range dependencies. Subsequently, by constructing a global-to-local feature extraction block (GL-FEB), a global-to-local feature extraction approach with multi-scale self-attention, similar to the Swin Transformer, is established. Furthermore, a dynamic feature distillation block (DFDB) is incorporated to exploit hierarchical features from different layers and adaptively recalibrate their responses. Finally, DPLKA adopts lightweight components such as depth-wise separable convolution (SDC) and a distillation feature extraction module (DFEM), which greatly improve the efficiency of the method. Extensive experiments on five benchmark datasets show that DPLKA achieves superior reconstruction accuracy (a gain of 0.212 dB on the Urban100 dataset at ×4 scale), fast running time (0.047 s on the Urban100 dataset), and competitive parameter counts and FLOPs.
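
The abstract names PSLKA but gives no implementation details, so the following is only a minimal PyTorch sketch of the general idea. It assumes the standard large-kernel-attention decomposition (a depth-wise convolution, a depth-wise dilated convolution, and a pointwise convolution producing a multiplicative attention map) combined with a pyramid channel split into branches of different receptive fields; all class names, kernel sizes, and the fusion layer are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Assumed LKA-style decomposition: depth-wise conv + depth-wise dilated
    conv + pointwise conv yield a spatial attention map that multiplicatively
    reweights the input, giving a large effective receptive field cheaply."""
    def __init__(self, dim, k_dw=5, k_dil=7, dilation=3):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, k_dw, padding=k_dw // 2, groups=dim)
        self.dw_dil = nn.Conv2d(dim, dim, k_dil, groups=dim, dilation=dilation,
                                padding=(k_dil // 2) * dilation)
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dil(self.dw(x)))
        return x * attn  # long-range dependencies via the large effective kernel

class PyramidSplitLKA(nn.Module):
    """Pyramid split (hypothetical): divide channels into groups, give each
    group an attention branch with a different effective kernel size, then
    fuse the groups with a pointwise convolution for multi-scale context."""
    def __init__(self, dim, kernels=((3, 5), (5, 7), (7, 9)), dilation=3):
        super().__init__()
        assert dim % len(kernels) == 0
        self.group = dim // len(kernels)
        self.branches = nn.ModuleList(
            LargeKernelAttention(self.group, k_dw, k_dil, dilation)
            for k_dw, k_dil in kernels)
        self.fuse = nn.Conv2d(dim, dim, 1)  # fuse information across scales

    def forward(self, x):
        chunks = torch.split(x, self.group, dim=1)
        out = torch.cat([b(c) for b, c in zip(self.branches, chunks)], dim=1)
        return self.fuse(out)

# Usage: a 48-channel feature map, a typical width for lightweight SR trunks.
x = torch.randn(1, 48, 64, 64)
print(PyramidSplitLKA(48)(x).shape)  # torch.Size([1, 48, 64, 64])
```

Because every spatial convolution above is depth-wise, each branch stays cheap even with a large effective kernel, which is consistent with the paper's lightweight design goal.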
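Likewise, the DFDB and DFEM are only named in the abstract. The sketch below assumes an IMDN/RFDN-style distillation scheme (each step keeps a "distilled" slice of channels and refines the rest, with all slices fused by a 1×1 convolution) and uses a squeeze-and-excitation-style channel gate as a stand-in for the adaptive recalibration of responses; every name and hyperparameter here is hypothetical, not the paper's code.

```python
import torch
import torch.nn as nn

class DynamicFeatureDistillation(nn.Module):
    """Assumed distillation block: each step splits features into a kept
    'distilled' slice and a coarse remainder that is refined further; the
    concatenated slices are fused, then recalibrated by a channel gate
    (SE-style, standing in for the 'dynamic' recalibration)."""
    def __init__(self, dim=48, steps=3, distill_ratio=0.25):
        super().__init__()
        self.dc = int(dim * distill_ratio)   # channels distilled per step
        self.act = nn.LeakyReLU(0.05, inplace=True)
        self.refine = nn.ModuleList()
        in_ch = dim
        for _ in range(steps):
            self.refine.append(nn.Conv2d(in_ch, dim, 3, padding=1))
            in_ch = dim - self.dc            # coarse channels passed onward
        self.fuse = nn.Conv2d(self.dc * steps + in_ch, dim, 1)
        self.gate = nn.Sequential(           # adaptive channel recalibration
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim // 4, dim, 1), nn.Sigmoid())

    def forward(self, x):
        distilled, feat = [], x
        for conv in self.refine:
            out = self.act(conv(feat))
            d, feat = torch.split(out, [self.dc, out.size(1) - self.dc], dim=1)
            distilled.append(d)              # hierarchical features kept
        distilled.append(feat)               # final coarse features
        y = self.fuse(torch.cat(distilled, dim=1))
        return y * self.gate(y)              # recalibrated responses
```

The distilled slices collected across steps are what lets the block reuse hierarchical features from different depths, matching the role the abstract describes for the DFDB.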
Key words
Single image super-resolution, Large kernel attention, Pyramid split module, Lightweight network