Hybrid cameras, which combine a high-resolution camera surrounded by multiple low-resolution cameras, avoid the trade-off between spatial and angular resolution inherent to light-field cameras. However, reconstructing a light field from hybrid camera data requires overcoming the dual challenge of enhancing both spatial and angular resolution; further challenges include matching images from cameras with different resolutions and building a trainable real-capture dataset for hybrid camera data. This study proposes a parametric representation of the neural light field with decoupled light field data, based on the spatial-angular consistency of the light field. The proposed method decomposes the light field into angular and color information, parametrically represents these two components with coordinate-based neural networks, and builds a neural disparity field and a neural central view field. Using the disparity propagation equation, the two modules are connected in series, forming a differentiable network architecture. To separate the angular and color information in the light field, discrete camera images are matched against the continuous images expressed by the neural central view field using a two-step training strategy: the neural central view field is trained first, and the neural disparity field is then trained by matching discrete camera images against the continuous images expressed by the neural central view field. Experimental results demonstrate that the proposed method effectively exploits the spatial and angular information of hybrid camera data, yielding high-quality light field reconstruction.
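The decoupled design described above can be sketched in a few lines. This is a minimal, untrained toy (all names are hypothetical, and the random two-layer networks merely stand in for the paper's coordinate-based networks): a central view field maps pixel coordinates to color, a disparity field maps the same coordinates to a scalar disparity, and a disparity-propagation step of the form I(x, y, u, v) ≈ C(x + u·D(x, y), y + v·D(x, y)) warps the continuous central view to the sub-aperture view at angle (u, v), chaining the two modules differentiably.

```python
import numpy as np

def tiny_mlp(in_dim, out_dim, hidden=16, seed=0):
    """Random, untrained 2-layer MLP standing in for a coordinate network."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, 0.5, (in_dim, hidden))
    w2 = rng.normal(0.0, 0.5, (hidden, out_dim))
    def f(x):
        return np.tanh(x @ w1) @ w2
    return f

central_view_field = tiny_mlp(2, 3, seed=1)   # C: (x, y) -> RGB color
disparity_field    = tiny_mlp(2, 1, seed=2)   # D: (x, y) -> scalar disparity

def render_view(coords, u, v):
    """Warp the continuous central view to the (u, v) sub-aperture view."""
    d = disparity_field(coords)               # per-pixel disparity, shape (N, 1)
    shifted = coords + d * np.array([u, v])   # disparity propagation step
    return central_view_field(shifted)        # query color at the warped coords

# A 4x4 grid of normalized pixel coordinates.
coords = np.stack(np.meshgrid(np.linspace(0.0, 1.0, 4),
                              np.linspace(0.0, 1.0, 4)), -1).reshape(-1, 2)
center = render_view(coords, 0.0, 0.0)   # (u, v) = (0, 0): the central view itself
side   = render_view(coords, 0.5, 0.0)   # a shifted sub-aperture view
```

At (u, v) = (0, 0) the warp is the identity, so the rendered view coincides with a direct query of the central view field; that consistency is what lets the angular (disparity) and color components be supervised separately in the two-step training the abstract describes.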