
Blind compression artifact reduction using dense parallel convolutional neural network.

Authors :
Amaranageswarao, Gadipudi
Deivalakshmi, S.
Ko, Seok-Bum
Source :
Signal Processing: Image Communication, Nov. 2020, Vol. 89.
Publication Year :
2020

Abstract

Different artifacts manifest whenever an image is compressed by a lossy compression algorithm. Higher-frequency details present in the image tend to be eliminated by compression, and in certain cases compression introduces spurious small image structures and noise. This phenomenon limits image quality, making images appear much less pleasant to the human eye. Furthermore, the performance of downstream machine learning tasks such as object detection is reduced by compression. In this paper, we introduce a novel deep neural network with densely connected parallel convolutions to remove such artifacts and to recover the original image from its perturbed version. The proposed algorithm is named the densely connected parallel convolutional neural network, DPCNN for short. Parallel convolution provides model parallelism and reduces the training burden. Furthermore, the dense skip connections provide short paths for gradient back-propagation and alleviate the gradient vanishing problem. Moreover, the skip connections reduce feature redundancy by combining features from different levels and increase learning efficiency. However, these skip connections increase the model complexity, so a bottleneck layer is used to keep the model compact and to reduce its complexity. The proposed approach can be used as a preprocessing step in different computer vision tasks where images are degraded by compression. Unlike other methods, the proposed method is able to remove compression artifacts generated at any quality factor (QF). Experiments on benchmark datasets show the superiority of the proposed method over other methods, both quantitatively and qualitatively.

• The parallel convolutional layer provides model parallelism and achieves faster training.
• Efficient flow of information and gradients is achieved with dense skip connections.
• Reconstruction accuracy is improved with the collective knowledge of the feature representations.
• The bottleneck layer reduces the computational complexity and improves the model efficiency. [ABSTRACT FROM AUTHOR]
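To make the building blocks named in the abstract concrete, the following is a minimal sketch of a densely connected block built from parallel convolution branches with a 1×1 bottleneck. The layer widths, branch kernel sizes, and depth (`channels`, `growth`, `num_layers`) are hypothetical illustrative choices, not the authors' published DPCNN configuration.

```python
# Illustrative sketch only: dense skip connections over parallel convolution
# branches, compacted by a 1x1 bottleneck, in the spirit of the abstract.
# All hyperparameters here are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class ParallelConv(nn.Module):
    """Two convolution branches computed in parallel and fused by summation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The branches are independent, so they can be evaluated in parallel.
        return self.act(self.branch3(x) + self.branch5(x))

class DenseParallelBlock(nn.Module):
    """Each layer receives the concatenation of all earlier feature maps
    (dense skip connections); a 1x1 bottleneck keeps the output compact."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            ParallelConv(channels + i * growth, growth) for i in range(num_layers)
        )
        self.bottleneck = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return self.bottleneck(torch.cat(features, dim=1))

if __name__ == "__main__":
    block = DenseParallelBlock()
    y = block(torch.randn(1, 64, 32, 32))  # feature maps of a compressed image
    print(y.shape)                          # torch.Size([1, 64, 32, 32])
```

In such a design, the 1×1 bottleneck is what keeps the channel count from growing with every dense concatenation, which matches the complexity-control role the abstract assigns to the bottleneck layer.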

Details

Language :
English
ISSN :
0923-5965
Volume :
89
Database :
Academic Search Index
Journal :
Signal Processing: Image Communication
Publication Type :
Academic Journal
Accession number :
146396925
Full Text :
https://doi.org/10.1016/j.image.2020.116009