In this paper, we explore a new way to accelerate and densify unstructured multi-view stereo (MVS). While many unstructured MVS algorithms have been proposed, we find that image-guided resizing can significantly improve their 3D reconstruction results in both efficiency and completeness. We therefore build our framework upon a novel selective joint bilateral upsampling and depth propagation strategy. First, we downsample the input unstructured images to a lower resolution and run MVS on them to efficiently obtain depth and normal maps. Then, the proposed algorithm upsamples the normal maps under the guidance of the input images and uses them jointly to recover high-resolution depth maps while simultaneously enriching geometric detail. Finally, by adaptively fusing the reconstructed depth and normal maps, we construct the final dense 3D scene. Quantitative results validate the efficiency and effectiveness of the proposed method.
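The core of the upsampling step above is joint bilateral filtering: each high-resolution depth value is a weighted average of nearby low-resolution depth samples, where the weights combine spatial proximity with color similarity in the high-resolution guide image. The following is a minimal sketch of classic joint bilateral upsampling (in the spirit of Kopf et al.), not the paper's exact algorithm; all parameter names and defaults are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res depth map using a high-res grayscale guide image.

    Hypothetical sketch: spatial weights are computed in low-res coordinates,
    range weights come from the high-res guide, following the generic
    joint-bilateral-upsampling recipe rather than the paper's method.
    """
    h, w = guide_hr.shape
    hl, wl = depth_lr.shape
    sy, sx = h / hl, w / wl  # upsampling factors per axis
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            yl, xl = y / sy, x / sx  # continuous position in the low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy = int(round(yl)) + dy
                    qx = int(round(xl)) + dx
                    if 0 <= qy < hl and 0 <= qx < wl:
                        # spatial Gaussian in low-res coordinates
                        ws = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2)
                                    / (2 * sigma_s ** 2))
                        # range Gaussian on the high-res guide intensities
                        gq = guide_hr[min(int(qy * sy), h - 1),
                                      min(int(qx * sx), w - 1)]
                        wr = np.exp(-(guide_hr[y, x] - gq) ** 2
                                    / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lr[qy, qx]
                        den += ws * wr
            out[y, x] = num / den if den > 0 else 0.0
    return out
```

The guide image keeps depth discontinuities aligned with image edges: low-res samples whose guide intensity differs from the output pixel's receive exponentially smaller weight, so depth does not bleed across object boundaries.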
In this paper, we propose a novel multi-view stereo (MVS) method that can effectively estimate geometry in low-textured regions. Conventional MVS algorithms predict geometry by performing dense correspondence estimation across multiple views under epipolar-geometry constraints. Because low-textured regions contain little feature information for reliable matching, estimating their geometry remains challenging for previous MVS methods. To address this issue, we propose an MVS method based on texture enhancement. By enhancing the texture of each input image via our multiscale bilateral decomposition and reconstruction algorithm, our method can estimate reliable geometry in low-textured regions that are intractable for previous MVS methods. To densify the final output point cloud, we further propose a novel selective joint bilateral propagation filter that effectively propagates reliable geometry estimates to neighboring unpredicted regions. We validate the effectiveness of our method on the ETH3D benchmark; quantitative and qualitative comparisons demonstrate that it significantly improves reconstruction quality in low-textured regions.
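The propagation step described above can be illustrated by filling only the unpredicted pixels from their reliable neighbors, with weights that again combine spatial distance and guide-image similarity. This is a minimal joint-bilateral-style sketch, not the paper's filter; the function name, the `valid` mask convention, and all parameters are assumptions for illustration.

```python
import numpy as np

def propagate_depth(depth, valid, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Fill unpredicted depth pixels from reliable neighbors.

    Hypothetical sketch: `valid` marks pixels with a reliable depth estimate;
    invalid pixels receive a weighted average of valid neighbors, where the
    weights favor spatially close pixels with similar guide intensity.
    """
    h, w = depth.shape
    out = depth.copy()
    hole_ys, hole_xs = np.nonzero(~valid)  # only fill unpredicted pixels
    for y, x in zip(hole_ys, hole_xs):
        num = den = 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                qy, qx = y + dy, x + dx
                if 0 <= qy < h and 0 <= qx < w and valid[qy, qx]:
                    # spatial Gaussian weight
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    # range Gaussian weight on guide intensities
                    wr = np.exp(-(guide[y, x] - guide[qy, qx]) ** 2
                                / (2 * sigma_r ** 2))
                    num += ws * wr * depth[qy, qx]
                    den += ws * wr
        if den > 0:
            out[y, x] = num / den
    return out
```

Restricting the source pixels to the `valid` mask is what makes the propagation "selective": unreliable estimates never contribute, so errors are not smeared into the filled regions.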