1. Blind 360-degree image quality assessment via saliency-guided convolution neural network
- Author
- Miaomiao Qiu and Feng Shao
- Subjects
- Computer science, Image quality, Feature extraction, Pattern recognition, Convolutional neural network, Feature (computer vision), Distortion, Equirectangular projection, Artificial intelligence, Network model, Electrical and Electronic Engineering, Atomic and Molecular Physics, and Optics, Electronic, Optical and Magnetic Materials
- Abstract
With the rapid development of virtual reality (VR) technologies, quality assessment of 360-degree images has become increasingly urgent. Unlike traditional 2D images, distortion is not evenly distributed across a 360-degree image; for example, the projection distortion in the polar regions of the equirectangular projection (ERP) is more severe than in other regions. Thus, traditional 2D quality models cannot be directly applied to 360-degree images. In this paper, we propose a saliency-guided CNN model for blind 360-degree image quality assessment (SG360BIQA), which is mainly composed of a saliency prediction network (SP-Net) and a feature extraction network (F-Net). By training the whole network with the two sub-networks together, more discriminative features can be extracted and the mapping from feature representations to quality scores can be established more accurately. Moreover, since no sufficiently large database of 360-degree images is available, we initialize with a pre-trained network model rather than random parameters to overcome this limitation. Experimental results on two public 360-IQA databases demonstrate that our proposed model outperforms state-of-the-art full-reference and no-reference IQA metrics in terms of generalization ability and evaluation accuracy.
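The core idea of guiding quality features with a saliency map can be illustrated with a minimal sketch: features (as produced by an F-Net-like extractor) are spatially weighted by a saliency map (as produced by an SP-Net-like predictor) before pooling. The shapes, the function name, and the normalized weighted-average pooling below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def saliency_weighted_pool(features, saliency):
    """Pool a (C, H, W) feature map into a (C,) vector using a (H, W)
    saliency map as spatial weights (a hypothetical sketch of
    saliency-guided feature aggregation, not the paper's exact design)."""
    # Normalize saliency into a spatial weight map that sums to 1.
    w = saliency / (saliency.sum() + 1e-8)
    # Weighted average over spatial positions, per channel.
    return (features * w[None, :, :]).sum(axis=(1, 2))

# Toy example with random "activations" for one ERP patch.
rng = np.random.default_rng(0)
features = rng.random((64, 8, 16))   # stand-in for F-Net activations
saliency = rng.random((8, 16))       # stand-in for SP-Net output
pooled = saliency_weighted_pool(features, saliency)
print(pooled.shape)  # (64,)
```

In a joint training setup as described in the abstract, both the saliency branch and the feature branch would be optimized together, so the pooled vector feeding the quality regressor is shaped by where viewers are likely to look.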
- Published
- 2021