This paper presents a motion-based content-adaptive depth map enhancement algorithm that improves the quality of the depth map and reduces artifacts in the synthesized views. The proposed algorithm extracts depth cues from the motion distribution under specific camera-movement scenarios so that the distributions of depth and motion are aligned. In real-world scenarios, when the camera pans horizontally, the nearer an object is to the camera, the larger its apparent motion, and vice versa; depth can therefore be interpreted from motion through this relationship. In the fixed-camera scenario, the depth cue from motion can be derived in the same way, and the depth variation within a single moving object should be small; hence, the depth values of a moving object should not change rapidly. In addition, a bi-directional motion-compensated infinite impulse response (IIR) low-pass filter is employed to stabilize the depth maps over time. Consequently, the proposed algorithm not only aligns the depth map with the depth cues from motion but also enhances the stability and consistency of the depth maps in the spatio-temporal domain. Experimental results show that views synthesized from the enhanced depth maps are better in both objective and subjective measurements than views synthesized from the original depth maps or from depth maps produced by state-of-the-art depth enhancement algorithms.
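The two ideas in this abstract, mapping motion magnitude to a depth cue under horizontal panning and smoothing depth over time with a motion-compensated IIR filter, can be illustrated with a minimal numpy sketch. The function names, the linear rescaling of motion magnitude, and the single blending weight `alpha` are illustrative assumptions, not the paper's exact formulation (which is bi-directional and content-adaptive).

```python
import numpy as np

def depth_cue_from_motion(motion_x, d_min=0.0, d_max=255.0):
    """Map horizontal motion magnitude to a depth cue under camera panning.

    Assumption: larger horizontal motion means a nearer object, so the cue is
    proportional to |motion| and rescaled to the depth-map value range, where
    larger values denote nearer pixels.
    """
    mag = np.abs(motion_x).astype(np.float64)
    if mag.max() > mag.min():
        mag = (mag - mag.min()) / (mag.max() - mag.min())
    return d_min + mag * (d_max - d_min)

def temporal_iir_filter(depth_prev_filtered, depth_curr, motion_to_prev, alpha=0.5):
    """One step of a motion-compensated IIR low-pass filter over time.

    `motion_to_prev` holds (dy, dx) vectors mapping each current pixel back to
    its position in the previous frame; `alpha` is a hypothetical blending
    weight standing in for the paper's bi-directional filter design.
    """
    h, w = depth_curr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    py = np.clip(ys + motion_to_prev[..., 0], 0, h - 1).astype(int)
    px = np.clip(xs + motion_to_prev[..., 1], 0, w - 1).astype(int)
    warped_prev = depth_prev_filtered[py, px]  # motion-compensated previous depth
    return alpha * warped_prev + (1 - alpha) * depth_curr
```

Applying `temporal_iir_filter` frame by frame keeps each depth map consistent with its motion-compensated predecessor, which is the temporal-stability effect the abstract describes.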
This paper presents a systematic approach to evaluating a system from the perspectives of both algorithmic performance and complexity, where the complexity can be regarded as a potential architecture cost. The complexity metrics include the number of operations, the data storage requirement, the data transfer rate, and the number of storage accesses; these factors have the merit of being transparent to both algorithm and architecture. A case study of a 3D-HEVC coding tool, Backward View Synthesis Prediction (BVSP), is provided to demonstrate the proposed approach. BVSP achieves an effective BD-rate reduction by synthesizing a virtual view from depth information to remove inter-view redundancy. However, the coding performance and the complexity of BVSP differ across processing granularities. This paper explores the trade-off between coding performance and algorithmic complexity over various processing granularities; furthermore, an adaptive strategy is proposed that selects a suitable processing granularity according to the global depth distribution and the local depth variation. This method decreases complexity while maintaining comparable coding performance. Compared with HTM-10.0r1, the experimental results show no BD-rate increase on average, while the data transfer rate of the proposed method is reduced by 6.49% on average and by 11.90% in the best case, and the number of storage accesses is reduced by 31.03% on average and by 87.27% in the best case.
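The adaptive strategy described above can be sketched as a per-block decision that combines a global cue (the depth histogram of the frame) with a local cue (the depth variation inside the block). The thresholds, the variance-based local measure, and the 4x4 versus 8x8 sub-block sizes below are illustrative assumptions and not the paper's exact decision rule or the sub-block sizes defined by 3D-HEVC.

```python
import numpy as np

def select_bvsp_granularity(depth_block, global_hist,
                            var_thresh=25.0, near_ratio_thresh=0.5):
    """Pick a finer or coarser BVSP processing granularity for one block.

    A minimal sketch: fine granularity is chosen when the local depth varies
    strongly or when much of the frame lies in the nearer depth range, since
    per-sub-block warping then matters more; otherwise a coarser granularity
    saves memory traffic with little coding-performance impact.
    """
    # Global cue: fraction of depth samples in the nearer half of the range.
    near_ratio = global_hist[len(global_hist) // 2:].sum() / max(global_hist.sum(), 1)
    # Local cue: depth variance inside the current block.
    local_var = float(np.var(depth_block.astype(np.float64)))
    if local_var > var_thresh or near_ratio > near_ratio_thresh:
        return (4, 4)   # finer sub-blocks (hypothetical size)
    return (8, 8)       # coarser sub-blocks (hypothetical size)

# Example usage on a synthetic 8-bit depth frame:
depth_frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
hist = np.bincount(depth_frame.ravel(), minlength=256)
print(select_bvsp_granularity(depth_frame[0:8, 0:8], hist))
```

Coarser granularity reduces the number of depth fetches and compensation calls per block, which is how the strategy trades a small coding-performance margin for lower data transfer and fewer storage accesses.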