Abstract

View synthesis prediction (VSP) is a crucial coding tool for improving compression efficiency in next-generation 3D video systems. However, VSP is susceptible to catastrophic error propagation when multi-view video plus depth (MVD) data are transmitted over lossy networks. This paper aims to accurately model the transmission errors that VSP propagates along the inter-view direction. Toward this end, we first study how channel errors gradually propagate along the VSP-based inter-view prediction path. A new recursive model is then formulated to estimate the expected end-to-end distortion caused by these channel losses. Within the proposed model, the combined impact of the transmission distortions of both the texture video and the depth map on the quality of the synthesized reference view is analyzed mathematically. In particular, the expected view synthesis distortion due to depth errors is characterized in the frequency domain by a new approach that combines the energy densities of the reconstructed texture image and the channel errors. The proposed model also explicitly accounts for the disparity rounding operation invoked for the sub-pixel-precision rendering of the synthesized reference view. Experimental results demonstrate that the proposed analytic model effectively characterizes the channel-induced distortion in MVD-based 3D video transmission.
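
For orientation only, and not as the paper's exact formulation, recursive estimation of expected end-to-end distortion under packet loss commonly takes a per-pixel form of the kind sketched below; the symbols \(p\), \(\hat{f}\), and \(\tilde{f}\) are illustrative assumptions rather than the notation defined in this paper:

\[
\mathbb{E}\{d_n(i)\} \;=\; (1-p)\,\mathbb{E}\{d_n^{\mathrm{rec}}(i)\} \;+\; p\,\mathbb{E}\{d_n^{\mathrm{loss}}(i)\},
\]

where \(p\) is the packet-loss rate, \(d_n^{\mathrm{rec}}(i)\) and \(d_n^{\mathrm{loss}}(i)\) denote the distortion of pixel \(i\) in frame (or view) \(n\) when its packet is received or lost and concealed, respectively, and each term on the right-hand side is itself evaluated recursively from the reference pixels used by prediction. In a VSP setting, that recursion would additionally pass through the synthesized reference view, which is how texture and depth errors accumulate along the inter-view prediction path.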