2012 19th IEEE International Conference on Image Processing (ICIP 2012)
DOI: 10.1109/icip.2012.6467536

On modeling the rendering error in 3D video

Abstract: We propose an analytical model to estimate the rendering quality in 3D video. The model relates errors in the depth images to the rendering quality, taking into account texture image characteristics, texture image quality, the camera configuration, and the rendering process. Specifically, we derive position (disparity) errors from the depth errors, and the probability distribution of the position errors is used to calculate the power spectral density of the rendering errors. Experimental results with video sequen…
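The chain the abstract describes (depth error → position/disparity error → rendering-error spectrum) can be sketched numerically. The snippet below is an illustrative model, not the paper's exact derivation: it assumes the standard pinhole relation disparity = focal × baseline / depth, a wide-sense-stationary texture with an assumed PSD, and the common frequency-domain result that for a rendering error v(x) = u(x + d) − u(x) with random shift d, the error PSD is 2·(1 − Re E[exp(jωd)])·S_u(ω). All variable names and the toy spectrum are assumptions for illustration.

```python
import numpy as np

def disparity_error(depth_err, depth, focal, baseline):
    """Map a depth error to a disparity (position) error, assuming the
    pinhole relation disparity = focal * baseline / depth, so that
    d(disparity) = -focal * baseline / depth**2 * d(depth) in magnitude."""
    return focal * baseline * depth_err / depth**2

def rendering_error_psd(texture_psd, omegas, pos_err_samples):
    """Estimate the rendering-error PSD from a texture PSD and an empirical
    distribution of position errors, via the characteristic function of the
    position error: S_v(w) = 2 * (1 - Re E[exp(j*w*d)]) * S_u(w)."""
    char_fn = np.mean(np.exp(1j * np.outer(omegas, pos_err_samples)), axis=1)
    return 2.0 * (1.0 - char_fn.real) * texture_psd

# Toy usage: low-pass texture spectrum, Gaussian position errors (both assumed).
omegas = np.linspace(-np.pi, np.pi, 257)
texture_psd = 1.0 / (1.0 + (4.0 * omegas) ** 2)
pos_errs = np.random.default_rng(0).normal(0.0, 0.5, 10_000)
psd_v = rendering_error_psd(texture_psd, omegas, pos_errs)
# Total rendering-noise power E[V^2], by integrating the PSD over frequency.
noise_power = np.sum(psd_v) * (omegas[1] - omegas[0]) / (2 * np.pi)
```

Note that at ω = 0 the characteristic function equals 1, so the model predicts zero error power at DC: a pure position error shifts the texture but does not change its mean.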


Cited by 10 publications (7 citation statements). References 14 publications.
“…Takahashi [10] proposed an optimized view interpolation scheme based on a frequency-domain analysis of depth-map errors. Cheung et al. [11] proposed to estimate the synthesis quality using the power spectral density (PSD).…”
Section: Previous Work
Confidence: 99%
“…This frequency-domain analysis assumed that the underlying image signals are spatially invariant (i.e., wide-sense stationary); we found that in the current application this causes a rather significant estimation discrepancy (in [16], we used a sequence-specific constant to compensate for it). Specifically, across strong texture edges the video content changes much more quickly than in non-edge regions, which violates the spatial-invariance assumption.…”
Section: Estimate Noise Power Due To Depth Coding
Confidence: 99%
“…Let V = U − W be the noise in the rendering output due to coding errors in the texture/depth images. In [16], we show that under reasonable assumptions the total synthesis noise power (E[V^2]) can be estimated by summing two components: one is the synthesis noise power due to texture image coding (E[N^2]), the other is the synthesis noise power due to depth image coding…”
Section: Estimate Noise Power Due To Depth Coding
Confidence: 99%
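The additive decomposition quoted above can be checked with a toy Monte Carlo experiment. This is a sketch under assumptions not stated in the quote: both noise components are taken to be zero-mean and independent, so the cross term vanishes and the powers add. The distributions and variances are illustrative, not from [16].

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 200_000

# Assumed zero-mean, independent noise components:
# N ~ texture-coding noise, D ~ depth-induced rendering noise.
noise_texture = rng.normal(0.0, 0.3, n_samples)
noise_depth = rng.normal(0.0, 0.4, n_samples)

# Total synthesis noise V = N + D; since E[N*D] = 0, E[V^2] = E[N^2] + E[D^2].
v = noise_texture + noise_depth
total_power = np.mean(v**2)
component_sum = np.mean(noise_texture**2) + np.mean(noise_depth**2)
```

With these toy variances, total_power and component_sum both come out near 0.09 + 0.16 = 0.25, up to sampling noise.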