
A Depth-Bin-Based Graphical Model for Fast View Synthesis Distortion Estimation.

Authors :
Jin, Jian
Liang, Jie
Zhao, Yao
Lin, Chunyu
Yao, Chao
Wang, Anhong
Source :
IEEE Transactions on Circuits & Systems for Video Technology, Jun 2019, Vol. 29, Issue 6, p1754-1766. 13p.
Publication Year :
2019

Abstract

During 3-D video communication, transmission errors such as packet loss can corrupt both the texture and depth sequences. When these sequences are used to synthesize virtual views with the depth-image-based rendering method, view synthesis distortion is generated. A depth-value-based graphical model (DVGM) has been employed to achieve accurate packet-loss-caused view synthesis distortion estimation (VSDE). However, the DVGM models the complicated view synthesis process at the depth-value level, which is computationally expensive and therefore difficult to apply in practice. In this paper, a depth-bin-based graphical model (DBGM) is developed, in which the complicated view synthesis process is modeled at the depth-bin level so that it can be used for fast VSDE with a 1-D parallel camera configuration. To this end, several depth values are fused into one depth bin, and a depth-bin-oriented rule is developed to handle the warping competition process. Then, the properties of the depth bin are analyzed and utilized to form the DBGM. Finally, a conversion algorithm is developed to convert the per-pixel depth-value probability distribution into the depth-bin format. Experimental results verify that the proposed method is 8–32× faster and requires 17%–60% less memory than the DVGM, with exactly the same accuracy. [ABSTRACT FROM AUTHOR]
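The abstract's core conversion step is fusing several depth values into one depth bin and converting the per-pixel depth-value probability distribution into depth-bin form. Below is a minimal sketch of that fusion, assuming 8-bit depth maps (256 possible values), uniform bins of width `bin_size`, and a simple sum of probability mass per bin; the function name, bin width, and array shapes are illustrative assumptions, not the paper's actual binning rule or conversion algorithm.

```python
import numpy as np

def depth_values_to_bins(p_values: np.ndarray, bin_size: int = 8) -> np.ndarray:
    """Fuse a per-pixel depth-value probability distribution into depth bins.

    p_values: array of shape (..., 256), probabilities over the 256 possible
              8-bit depth values for each pixel (sums to 1 along the last axis).
    bin_size: number of consecutive depth values fused into one bin
              (illustrative choice; the paper's binning may differ).

    Returns an array of shape (..., 256 // bin_size) whose last axis holds the
    probability mass of each depth bin, i.e. the sum of the probabilities of
    the depth values falling in that bin.
    """
    n_values = p_values.shape[-1]
    assert n_values % bin_size == 0, "bin_size must divide the number of depth values"
    # Group consecutive depth values into bins and sum their probabilities.
    binned = p_values.reshape(*p_values.shape[:-1], n_values // bin_size, bin_size)
    return binned.sum(axis=-1)

# Example: a pixel whose received depth value is 100 with probability 0.7
# and 101 with probability 0.3 (e.g. due to a possible packet loss).
p = np.zeros(256)
p[100], p[101] = 0.7, 0.3
print(depth_values_to_bins(p, bin_size=8)[100 // 8])  # -> 1.0, both values fall in bin 12
```

Working on the coarser bin distribution rather than the full per-value distribution is what lets a bin-level model trade a modest loss of depth resolution for the reported speed and memory savings, while the paper's conversion and warping-competition rules preserve exact accuracy.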

Details

Language :
English
ISSN :
1051-8215
Volume :
29
Issue :
6
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
136847398
Full Text :
https://doi.org/10.1109/TCSVT.2018.2844743