A Depth-Bin-Based Graphical Model for Fast View Synthesis Distortion Estimation

Authors: Jin, Jian; Liang, Jie; Zhao, Yao*; Lin, Chunyu; Yao, Chao; Wang, Anhong
Source: IEEE Transactions on Circuits and Systems for Video Technology, 2019, 29(6): 1754-1766.
DOI: 10.1109/TCSVT.2018.2844743

Abstract

During 3-D video communication, transmission errors such as packet loss can occur in the texture and depth sequences. When these corrupted sequences are used to synthesize virtual views via the depth-image-based rendering method, view synthesis distortion is generated. A depth-value-based graphical model (DVGM) has previously been employed to achieve accurate estimation of packet-loss-caused view synthesis distortion (VSDE). However, the DVGM models the complicated view synthesis processes at the depth-value level, which is computationally expensive and difficult to apply in practice. In this paper, a depth-bin-based graphical model (DBGM) is developed, in which the complicated view synthesis processes are modeled at the depth-bin level, enabling fast VSDE under a 1-D parallel camera configuration. To this end, several depth values are fused into one depth bin, and a depth-bin-oriented rule is developed to handle the warping competition process. Then, the properties of the depth bin are analyzed and utilized to form the DBGM. Finally, a conversion algorithm is developed to convert the per-pixel input depth-value probability distribution into the depth-bin format. Experimental results verify that the proposed method is 8-32x faster and requires 17%-60% less memory than the DVGM, with exactly the same accuracy.
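The conversion step described above (fusing the per-pixel depth-value probability distribution into depth bins) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes uniform-width bins and simply sums the probabilities of the depth values falling into each bin; the function name `to_depth_bins` and the bin-width parameter are hypothetical.

```python
import numpy as np

def to_depth_bins(p_depth, bin_size):
    """Fuse consecutive depth values into depth bins by summing their
    probabilities (hypothetical uniform-width binning sketch)."""
    n_bins = (len(p_depth) + bin_size - 1) // bin_size
    p_bin = np.zeros(n_bins)
    for d, p in enumerate(p_depth):
        p_bin[d // bin_size] += p  # depth value d maps to bin d // bin_size
    return p_bin

# Per-pixel probability distribution over 8 depth values
# (e.g., uncertainty induced by packet loss).
p = np.array([0.1, 0.2, 0.1, 0.1, 0.2, 0.1, 0.1, 0.1])
print(to_depth_bins(p, bin_size=4))  # → [0.5 0.5]
```

Coarsening the distribution this way shrinks the state space the graphical model must enumerate, which is consistent with the reported speed and memory savings over the depth-value-level DVGM.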