Rate-optimal denoising with deep neural networks
- Author
- Heckel, Reinhard; Huang, Wen; Hand, Paul; Voroninski, Vladislav
- Subjects
- Image denoising; big data; random noise theory; image representation; network performance
- Abstract
- Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy observation. The underlying principle is that neural networks trained on large data sets have empirically been shown to generate natural images well from a low-dimensional latent representation of the image. Given such a generator network, a noisy image can be denoised by (i) finding the closest image in the range of the generator or by (ii) passing it through an encoder-generator architecture (known as an autoencoder). However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the network parameters. In this paper, we consider the problem of denoising an image corrupted by additive Gaussian noise using the two generator-based approaches. In both cases, we assume the image is well described by a deep neural network with ReLU activation functions, mapping a $k$-dimensional code to an $n$-dimensional image. In the case of the autoencoder, we show that the feedforward network reduces noise energy by a factor of $O(k/n)$. In the case of optimizing over the range of a generative model, we state and analyze a simple gradient algorithm that minimizes a non-convex loss function and provably reduces noise energy by a factor of $O(k/n)$. We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data. (An illustrative code sketch of approach (i) follows this record.)
- Published
- 2021
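Below is a minimal sketch of approach (i) from the abstract: denoising by finding the closest image in the range of a generator, i.e., minimizing the non-convex loss $\|G(z) - y\|^2$ over the $k$-dimensional latent code. The generator architecture, dimensions, optimizer, step size, and iteration count are illustrative assumptions, not the paper's exact setup, and plain gradient descent here stands in for the specific gradient algorithm the paper analyzes.

```python
# Illustrative sketch: denoise y = x + noise by minimizing ||G(z) - y||^2
# over the k-dimensional latent code z, where G is a ReLU generator.
# All dimensions and hyperparameters below are assumed for the example.
import torch

k, n = 32, 4096  # latent and image dimensions (assumed values)
torch.manual_seed(0)

# A small random expansive ReLU generator G: R^k -> R^n
# (a stand-in for a generator trained on image data).
G = torch.nn.Sequential(
    torch.nn.Linear(k, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, n), torch.nn.ReLU(),
)

# Synthetic "clean" image in the range of G, plus additive Gaussian noise.
with torch.no_grad():
    x = G(torch.randn(k))
y = x + 0.1 * torch.randn(n)

# Gradient descent on the non-convex loss ||G(z) - y||^2 over z.
z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((G(z) - y) ** 2).sum()
    loss.backward()
    opt.step()

x_hat = G(z).detach()  # denoised estimate: projection of y onto the range of G
print("noise energy:   ", ((y - x) ** 2).sum().item())
print("residual energy:", ((x_hat - x) ** 2).sum().item())
```

Comparing the two printed energies illustrates the noise-energy reduction that the paper quantifies as a factor of $O(k/n)$ under its assumptions on the generator and the noise.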