TR2013-085
View Synthesis Prediction Using Adaptive Depth Quantization for 3D Video Coding
"View Synthesis Prediction Using Adaptive Depth Quantization for 3D Video Coding", IEEE International Conference on Image Processing (ICIP), September 2013.
@inproceedings{Zou2013sep2,
  author = {Zou, F. and Tian, D. and Vetro, A. and Ortega, A.},
  title = {View Synthesis Prediction Using Adaptive Depth Quantization for 3D Video Coding},
  booktitle = {IEEE International Conference on Image Processing (ICIP)},
  year = 2013,
  month = sep,
  url = {https://www.merl.com/publications/TR2013-085}
}
Research Area:
Digital Video
Abstract:
Advanced multiview video systems are able to generate intermediate viewpoints of a 3D scene. In addition to the texture content, a corresponding depth map is associated with each viewpoint. To improve the coding efficiency of such content, view synthesis prediction can be used to further reduce inter-view redundancy beyond traditional disparity-compensated prediction. However, the predictor generated by the view synthesis process is affected by several factors, including the signal properties of the texture, the accuracy of the depth, the complexity of the scene, and coding errors in both the texture and the depth. This paper presents an analysis of view synthesis prediction performance considering these factors. Based on this analysis, an adaptive depth quantization scheme is proposed to improve the depth coding, leading to better view synthesis prediction and overall coding efficiency gains. The proposed scheme achieves an average bit-rate saving of 0.9% on the coded and synthesized video, with a maximum gain of up to 11.7% on the dependent views, in the context of an HEVC-based codec.
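
For background, the following is a minimal Python/NumPy sketch of the standard mechanism that view synthesis prediction builds on for rectified multiview-plus-depth content: converting an 8-bit depth map to per-pixel disparity and forward-warping the reference texture to the target viewpoint to form an extra prediction reference. The camera parameters, function names, and the simple nearest-depth-wins warping loop are assumptions chosen for illustration; this sketch does not reproduce the adaptive depth quantization scheme proposed in the paper.

import numpy as np

def depth_to_disparity(depth8, z_near, z_far, focal, baseline):
    # Common 8-bit depth convention: value v in [0, 255] maps linearly to
    # inverse depth between 1/z_far (v = 0) and 1/z_near (v = 255).
    inv_z = depth8.astype(np.float64) / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    # For rectified, parallel cameras the horizontal disparity is f * b / Z.
    return focal * baseline * inv_z

def synthesize_prediction(ref_texture, ref_depth8, z_near, z_far, focal, baseline):
    # Forward-warp the reference texture to the target viewpoint using the
    # per-pixel disparity; on collisions the pixel closer to the camera
    # (larger disparity) wins.  The warped image serves as an additional
    # prediction reference when coding the dependent view; disoccluded
    # samples are simply left as zeros in this sketch.
    h, w = ref_depth8.shape
    disparity = depth_to_disparity(ref_depth8, z_near, z_far, focal, baseline)
    pred = np.zeros_like(ref_texture)
    best = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - disparity[y, x]))  # shift toward the target view
            if 0 <= xt < w and disparity[y, x] > best[y, xt]:
                best[y, xt] = disparity[y, x]
                pred[y, xt] = ref_texture[y, x]
    return pred

# Example usage with synthetic data (illustrative values only):
texture = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
depth = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
vsp_reference = synthesize_prediction(texture, depth, z_near=0.3, z_far=10.0,
                                      focal=1000.0, baseline=0.05)

Because the disparity is derived from the coded depth map, quantization error in the depth propagates into geometric error in the warped predictor; this dependence is what motivates adapting the depth quantization to improve the quality of the synthesized reference.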