In a light field-based free viewpoint video (LF-based FVV) system, effective sampling density (ESD) is defined as the number of rays per unit area of the scene that have been acquired and are selected in the rendering process for reconstructing an unknown ray. This paper extends the concept of ESD and shows that ESD is a tractable metric that quantifies the joint impact of the imperfections of LF acquisition and rendering. By deriving and analyzing ESD for commonly used LF acquisition and rendering methods, it is shown that ESD is an effective indicator determined by system parameters and can be used to estimate output video distortion directly, without access to the ground truth. This claim is verified by extensive numerical simulations and comparison to PSNR. Furthermore, an empirical relationship between the output distortion (in PSNR) and the calculated ESD is established, allowing direct assessment of the overall video distortion without an actual implementation of the system. A small-scale subjective user study is also conducted, which indicates a correlation of 0.91 between ESD and perceived quality.
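The definition above can be sketched directly in code. This is a minimal illustration, not the authors' implementation: the function name and inputs (a count of selected rays and a scene patch area) are hypothetical, and the full metric in the paper is derived from system parameters rather than counted explicitly.

```python
# Hedged sketch of ESD as defined in the abstract: the number of acquired
# rays that the renderer actually selects for reconstructing an unknown
# ray, per unit area of the scene. Names are illustrative, not the
# paper's notation.

def effective_sampling_density(n_selected_rays: int, patch_area: float) -> float:
    """Rays per unit scene area used in reconstruction."""
    if patch_area <= 0:
        raise ValueError("patch area must be positive")
    return n_selected_rays / patch_area

# e.g. 8 camera rays selected for a 0.5 m^2 scene patch
print(effective_sampling_density(8, 0.5))  # -> 16.0
```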
In a free viewpoint video system, the scene is captured by a number of cameras and it would be desirable to optimize the configuration of cameras, such as their location or orientation, to improve the rendering quality. This paper introduces a mathematical representation of the multi-camera geometry, called the correspondence field (CF), which can be used to quantify the suitability of a camera configuration for a given arrangement of objects in the scene. The correspondence field describes the spatial topology of the intersecting rays of cameras, arranged as a number of layers or surfaces in the field of view of cameras. The paper derives the topology of CF for certain camera arrangements and analyzes the impact of changes in camera location or orientation on this topology. It demonstrates that CF can be used to find the optimum camera configuration for a given objective. It also presents simulation results of this method using our light field simulator.
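The correspondence field is built from points where rays of different cameras intersect. As a rough, hedged illustration of that primitive (not the paper's construction), the following computes the intersection of two 2-D rays given each camera's origin and viewing direction; all names are illustrative.

```python
import numpy as np

# Hedged sketch: one building block of a correspondence field is the
# intersection point of rays cast from two different cameras. This solves
# o1 + t*d1 = o2 + s*d2 for t, s >= 0 in 2-D.

def ray_intersection(o1, d1, o2, d2):
    """Return the intersection point of two 2-D rays, or None."""
    A = np.column_stack([np.asarray(d1, float), -np.asarray(d2, float)])
    if abs(np.linalg.det(A)) < 1e-12:
        return None  # parallel rays never intersect
    t, s = np.linalg.solve(A, np.asarray(o2, float) - np.asarray(o1, float))
    if t < 0 or s < 0:
        return None  # the crossing lies behind at least one camera
    return np.asarray(o1, float) + t * np.asarray(d1, float)

# Two cameras on the x-axis looking into the scene
print(ray_intersection([0, 0], [1, 1], [2, 0], [-1, 1]))  # -> [1. 1.]
```

Enumerating such intersections over all camera pairs and grouping them by depth yields the layered structure the abstract refers to.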
Light field rendering (LFR) is an active research area in computer vision and computer graphics, and it plays a crucial role in free viewpoint video (FVV) systems. Although several rendering algorithms have been suggested for LFR, the lack of appropriate datasets with known ground truth has prevented a systematic comparison and evaluation of LFR algorithms. In most LFR papers the method is applied to several test cases for validation, so only a subjective visual assessment of the output is given. To overcome this problem, this paper presents a quantitative approach for the comparison and evaluation of LFR algorithms. The core of the proposed methodology is a simulation model and a 3D engine. The platform produces the reference images and ground truth data for a given 3D model. These data are then fed to a comparison engine that compares the images synthesized by the light field engine with the original images from the simulation, generating objective results for evaluation. The methodology is flexible and efficient: it can automatically generate LFR datasets and objectively compare and analyze any subset of LFR methods for any given experimental design. Five key rendering algorithms are evaluated with the proposed methodology to validate it. Overall, it is shown that the proposed quantitative methodology can be used for objective evaluation and comparison of LFR algorithms.
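The objective-comparison step above reduces to a standard image-fidelity measure between a rendered view and the simulator's reference image. A minimal PSNR sketch (PSNR is the metric named in the first abstract; the array names here are illustrative and 8-bit images are assumed):

```python
import numpy as np

# Hedged sketch of the comparison engine's core: PSNR in dB between a
# ground-truth reference image and a synthesized (rendered) image.

def psnr(reference: np.ndarray, rendered: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio; infinite for identical images."""
    mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4), dtype=np.uint8)
out = ref.copy()
out[0, 0] = 16  # introduce a small rendering error
print(round(psnr(ref, out), 2))  # -> 36.09
```

In the described pipeline this score would be computed per rendered view and aggregated across the experiment design to rank the LFR algorithms objectively.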