Learned Image Compression with Multi-Scale Spatial and Contextual Information Fusion

ICIP (2022)

Abstract
Although learned image compression based on convolutional neural networks and hyperpriors has made significant progress, the difference between original and reconstructed images remains obvious. To reconstruct compressed images with higher quality, this work proposes a novel model that fuses multi-scale spatial and contextual information. Since spatial information may be lost during forward propagation as neural networks go deeper, a multi-scale information fusion module is designed to help the encoder retain the necessary spatial information while removing redundancy from the latent representation. Meanwhile, a multi-scale 3D context module with masked 3D convolution kernels of varying sizes is devised to capture multi-scale correlations in the latent representation. Experiments demonstrate the superiority of the proposed approach over a number of state-of-the-art image compression methods and Versatile Video Coding (VVC).
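The masked 3D convolutions mentioned in the abstract are typically built by zeroing out the kernel weights at and after the current position, so the context model conditions only on already-decoded entries of the latent tensor. The paper does not give its mask construction; the following is a minimal NumPy sketch of one common choice (a "type-A" causal mask in raster-scan order), with the function name `causal_mask_3d` being illustrative rather than taken from the paper:

```python
import numpy as np

def causal_mask_3d(k):
    """Build a type-A causal mask for a k x k x k convolution kernel:
    positions strictly before the kernel center in raster-scan order
    are 1, the center and all later positions are 0, so the masked
    convolution only sees previously decoded latent entries."""
    mask = np.zeros((k, k, k), dtype=np.float32)
    center = (k ** 3) // 2  # flat index of the kernel center
    mask.reshape(-1)[:center] = 1.0
    return mask

# Varying kernel sizes give context windows of different scales,
# as in the multi-scale 3D context module described above.
m3 = causal_mask_3d(3)
m5 = causal_mask_3d(5)
print(int(m3.sum()), int(m5.sum()))  # 13 62
```

In a framework such as PyTorch, each mask would be multiplied element-wise with the corresponding `Conv3d` weight before every forward pass; using several kernel sizes in parallel yields the multi-scale correlations the abstract refers to.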
Key words
image compression, contextual information fusion, spatial, multi-scale