Benefiting from multi-view video plus depth (MVD) and depth-image-based rendering (DIBR) technologies, only a limited number of views of a real 3-D scene need to be captured, compressed, and transmitted. However, the quality assessment of synthesized views is very challenging, since view synthesis and depth map compression inevitably produce new types of distortions that are inherently different from texture coding errors, and the corresponding original views (reference views) are usually not available. Thus, full-reference quality metrics cannot be applied to synthesized views. In this paper, we propose a novel no-reference image quality assessment method for 3-D synthesized views, called NIQSV+. This blind metric evaluates the quality of synthesized views by measuring the typical synthesis distortions: blurry regions, black holes, and stretching, with access to neither the reference image nor the depth map. To evaluate the performance of the proposed method, we compare it with four full-reference 3-D (synthesized-view-dedicated) metrics, five full-reference 2-D metrics, and three no-reference 2-D metrics. In terms of correlation with subjective scores, our experimental results show that the proposed no-reference metric approaches the best of the state-of-the-art full-reference and no-reference 3-D metrics, and significantly outperforms the widely used no-reference and full-reference 2-D metrics. In terms of its approximation of human ranking, the proposed metric achieves the best performance in the experimental test.
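To make these distortion cues concrete, the sketch below shows how two of the three cues (black-hole coverage and blur) could be quantified in Python from the synthesized view alone. This is a minimal illustration under our own assumptions, not the NIQSV+ implementation: the threshold, kernel size, file path, and function names are all hypothetical choices.

```python
# Minimal sketch of no-reference cues similar in spirit to those NIQSV+
# measures: black-hole coverage and blur, computed from the synthesized
# view alone (no reference image, no depth map). All thresholds, kernel
# sizes, and the file path below are illustrative assumptions.
import numpy as np
import cv2

def black_hole_ratio(gray: np.ndarray, thresh: int = 5) -> float:
    """Fraction of near-black pixels, a proxy for unfilled disocclusions."""
    return float(np.mean(gray <= thresh))

def blur_score(gray: np.ndarray, ksize: int = 5) -> float:
    """Mean deviation from a morphologically opened image; large values
    hint at the smeared, over-filled regions typical of view synthesis."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    return float(np.mean(np.abs(gray.astype(np.float32) -
                                opened.astype(np.float32))))

gray = cv2.imread("synthesized_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
print(black_hole_ratio(gray), blur_score(gray))
```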
View synthesis introduces geometric distortions that are not handled efficiently by existing image quality assessment metrics. Despite the widespread adoption of 3-D technology, notably 3-D television (3DTV) and free-viewpoint television (FTV), the field of view synthesis quality assessment has not yet been widely investigated, and new quality metrics are required. In this study, we propose a new full-reference objective quality assessment metric: the View Synthesis Quality Assessment (VSQA) metric. Our method is dedicated to artifact detection in synthesized viewpoints and aims to handle areas where disparity estimation may fail, such as thin objects, object borders, transparency, illumination variations or color differences between the left and right views, and periodic objects. The key feature of the proposed method is the use of three visibility maps that characterize scene complexity in terms of texture, diversity of gradient orientations, and presence of high contrast. Moreover, the VSQA metric can be defined as an extension of any existing 2-D image quality assessment metric. Experimental tests have shown the effectiveness of the proposed method.
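As an illustration of how such visibility maps can extend a base 2-D metric, the hedged sketch below weights a per-pixel squared-error map by a texture-complexity map built from local standard deviation. Only one of the three maps is shown, and the window size, the direction of the masking (down-weighting errors in textured areas), and the function names are our own assumptions, not the paper's exact formulation.

```python
# Hedged sketch of the visibility-map idea: weight a per-pixel 2-D
# distortion map by a texture-complexity map estimated on the reference
# view. The gradient-orientation and contrast maps would follow the same
# weighting pattern. Window size and combination rule are assumptions.
import numpy as np
import cv2

def texture_visibility(ref_gray: np.ndarray, win: int = 9) -> np.ndarray:
    """Normalized local standard deviation as a texture-complexity map."""
    ref = ref_gray.astype(np.float32)
    mean = cv2.blur(ref, (win, win))
    sq_mean = cv2.blur(ref * ref, (win, win))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return std / (std.max() + 1e-8)

def vsqa_like_score(ref_gray: np.ndarray, syn_gray: np.ndarray) -> float:
    """Squared error down-weighted in textured areas, assuming texture
    masking makes distortions there less visible (our assumption here)."""
    err = (ref_gray.astype(np.float32) - syn_gray.astype(np.float32)) ** 2
    weight = 1.0 - texture_visibility(ref_gray)
    return float(np.mean(weight * err))
```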
Depth-image-based rendering (DIBR) is a fundamental technology in several 3-D applications, such as free-viewpoint video (FVV), virtual reality (VR), and augmented reality (AR). However, it also brings new challenges to quality assessment, since DIBR synthesis induces new types of distortions that are inherently different from those caused by video coding. In this paper, we present a new DIBR-synthesized image database with associated subjective scores, and we test the performance of state-of-the-art objective quality metrics on this database. This work focuses on distortions induced solely by different DIBR synthesis methods. Seven state-of-the-art DIBR algorithms, including inter-view synthesis and single-view-based synthesis methods, are covered by the database. The quality of the synthesized views was assessed subjectively by 41 observers and objectively using 14 state-of-the-art objective metrics. The subjective test results show that the inter-view synthesis methods, which have more input information, significantly outperform the single-view-based ones. The correlation between the tested objective metrics and the subjective scores on this database reveals that further studies are still needed to develop a better objective quality metric dedicated to DIBR-synthesized views.
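Such correlation results are commonly reported with the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC) between each metric's outputs and the subjective scores; in practice, a nonlinear (e.g., logistic) fit of metric scores to subjective scores often precedes the PLCC computation. A minimal sketch with placeholder data, not values from the database:

```python
# Minimal sketch of evaluating an objective metric against subjective
# scores via PLCC and SROCC. The arrays are placeholder data only.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([3.1, 4.2, 2.5, 3.8, 1.9])          # subjective mean opinion scores
metric = np.array([0.62, 0.81, 0.47, 0.74, 0.35])  # one metric's outputs

plcc, _ = pearsonr(metric, mos)
srocc, _ = spearmanr(metric, mos)
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```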