Current deep-learning-based light field angular super-resolution algorithms incur excessive computational cost and run slowly because they up-sample each lenslet region of the light field image sequentially. In this paper, we propose a novel convolutional neural network that rapidly enhances angular resolution by up-sampling all lenslet regions at once. First, the network simultaneously extracts the angular information of all lenslet regions in the input light field image. Then, from the extracted angular information, four feature maps are predicted; notably, the angular resolution of each feature map equals that of the input light field image. Finally, to enhance the angular resolution, we integrate the four feature maps into one image by following the arrangement of angular information within the lenslet regions. Experimental results verify the effectiveness of the proposed method. Our network needs only 11.95 s to double the angular resolution of a light field image with 2562×3724 pixels, more than 20 times faster than the state-of-the-art method, while also achieving an average PSNR gain of 0.39 dB. INDEX TERMS Light field (LF), angular super-resolution, convolutional neural network (CNN).
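The final integration step described above, merging four feature maps of the input's angular resolution into one image of doubled angular resolution, amounts to a depth-to-space (pixel-shuffle) interleaving. The sketch below illustrates that idea only; the function name, the 2D single-channel maps, and the exact sub-grid assignment are illustrative assumptions, since the abstract does not specify the precise arrangement:

```python
import numpy as np

def interleave_feature_maps(f00, f01, f10, f11):
    """Interleave four feature maps of shape (H, W) into one
    (2H, 2W) image, a depth-to-space (pixel-shuffle) step.
    The network that predicts the four maps is not shown here;
    this only sketches the final integration stage."""
    H, W = f00.shape
    out = np.empty((2 * H, 2 * W), dtype=f00.dtype)
    out[0::2, 0::2] = f00  # top-left sample of each 2x2 block
    out[0::2, 1::2] = f01  # top-right sample
    out[1::2, 0::2] = f10  # bottom-left sample
    out[1::2, 1::2] = f11  # bottom-right sample
    return out
```

Because the four maps are produced in a single forward pass, one interleaving yields the full up-sampled image, which is consistent with the speed advantage over per-region sequential up-sampling.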
There is a trade-off between spatial resolution and angular resolution in light field applications; various targeted algorithms have been proposed to enhance angular resolution while preserving high spatial resolution, a task also called view synthesis. Among them, depth estimation-based methods can use only four corner views to reconstruct a novel view at an arbitrary location. However, depth estimation is time-consuming, and the quality of the reconstructed novel view depends not only on the number of input views but also on their locations. In this paper, we explore the relationship between the choice of input views and the angular super-resolution results. Different numbers and positions of input views are selected to compare reconstruction speed and novel-view quality. Experimental results show that the algorithm slows down as the number of input views per novel view increases, and that novel-view quality degrades as the distance from the input views grows. After this comparison, using two input views on the same row to reconstruct the novel views between them achieves fast and accurate light field view synthesis.
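The two-view configuration above can be sketched as disparity-guided warping followed by distance-weighted blending. The snippet below is a minimal illustration under a deliberately simplified assumption of a single constant horizontal disparity between the two views; the actual method estimates per-pixel depth, and the function name and parameters are hypothetical:

```python
import numpy as np

def synthesize_between(left, right, alpha, disparity):
    """Sketch: synthesize a novel view at fractional angular
    position alpha in (0, 1) between two views on the same row.
    Assumes one constant integer disparity between the endpoint
    views (a simplification; real methods use per-pixel depth)."""
    # Warp each input toward the target angular position.
    shift_l = int(round(alpha * disparity))        # left view shifts right
    shift_r = int(round((1 - alpha) * disparity))  # right view shifts left
    warped_l = np.roll(left, shift_l, axis=1)
    warped_r = np.roll(right, -shift_r, axis=1)
    # Distance-weighted blend favors the nearer input view.
    return (1 - alpha) * warped_l + alpha * warped_r
```

With only two warps and one blend per novel view, this structure reflects why fewer input views per novel view translate into faster reconstruction.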