Rate-optimal denoising with deep neural networks
Information and Inference: A Journal of the IMA (2021)
Abstract
Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy observation. The underlying principle is that neural networks trained on large data sets have empirically been shown to generate natural images well from a low-dimensional latent representation of the image. Given such a generator network, a noisy image can be denoised by (i) finding the closest image in the range of the generator or (ii) passing it through an encoder-generator architecture (known as an autoencoder). However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the network parameters. In this paper, we consider the problem of denoising an image corrupted by additive Gaussian noise using the two generator-based approaches. In both cases, we assume the image is well described by a deep neural network with ReLU activation functions, mapping a k-dimensional code to an n-dimensional image. In the case of the autoencoder, we show that the feedforward network reduces noise energy by a factor of O(k/n). In the case of optimizing over the range of a generative model, we state and analyze a simple gradient algorithm that minimizes a non-convex loss function and provably reduces noise energy by a factor of O(k/n). We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data.
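The second approach described above, optimizing over the range of a generator, can be illustrated with a minimal NumPy sketch. This is not the paper's exact algorithm or network: it uses a one-layer random ReLU generator G(z) = relu(Wz), an illustrative initialization (z0 = Wᵀy), and hand-picked dimensions, step size, and iteration count. It only shows the shape of the method: denoise y by gradient descent on the non-convex loss f(z) = ½‖G(z) − y‖².

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 5, 200                                # latent and image dimensions (illustrative)
W = rng.standard_normal((n, k)) / np.sqrt(n)  # random generator weights

def G(z):
    """One-layer ReLU generator G(z) = relu(W z)."""
    return np.maximum(W @ z, 0.0)

def grad(z, y):
    """Gradient of f(z) = 0.5 * ||G(z) - y||^2 via the chain rule."""
    residual = G(z) - y
    active = (W @ z > 0).astype(float)        # ReLU derivative (0/1 mask)
    return W.T @ (active * residual)

# Noisy observation of an image in the generator's range.
z_true = rng.standard_normal(k)
y = G(z_true) + 0.1 * rng.standard_normal(n)

# Gradient descent from a simple linear initialization (an assumption here,
# not the initialization analyzed in the paper).
z = W.T @ y
for _ in range(500):
    z -= 0.5 * grad(z, y)

# The estimate G(z) should be much closer to the clean image than y is.
err_denoised = np.linalg.norm(G(z) - G(z_true))
err_noisy = np.linalg.norm(y - G(z_true))
print(err_denoised, err_noisy)
```

In this toy setting the residual error of the projected estimate is expected to scale with k/n of the noise energy, consistent with the rate stated in the abstract, whereas the raw observation y carries the full noise energy.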
Key words
deep neural networks, denoising