Three-dimensional (3D) vision plays an important role in industrial applications, where occlusion and reflection make it challenging to reconstruct the entire scene. In this paper, we present a novel 3D reconstruction framework that addresses the occlusion and reflection problems in complex scenes. A dual monocular structured light system is adopted to obtain point clouds from different viewing angles and fill in the missing points. To improve the efficiency of point cloud fusion, we construct a decision map that avoids reconstructing the overlapping regions of the left and right systems twice. In addition, a compensation method based on the decision map is proposed to reduce the reconstruction error of the dual monocular system in the fusion area. Gray-code and phase-shifting patterns are used to encode the scene, and the phase-jumping problem at the phase boundaries is avoided by designing a dedicated compensation function. Experiments including accuracy evaluation, comparison with a traditional fusion algorithm, and reconstruction of real complex scenes validate the accuracy of the method and its robustness to shiny surfaces and occlusion.
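For context, the abstract refers to the standard gray-code plus N-step phase-shifting encoding used in structured light. The sketch below shows only this textbook decoding step (wrapped phase from the phase-shifted fringe images, then absolute phase from the gray-code fringe order); the paper's own contributions, such as the decision map and the boundary compensation function, are not reproduced here, and the function names are illustrative assumptions.

```python
import numpy as np

def wrapped_phase(images):
    """Standard N-step phase-shifting decoding.

    `images` is a list of N fringe images captured with phase shifts
    of 2*pi*n/N. Returns the wrapped phase in (-pi, pi].
    """
    N = len(images)
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for n, I in enumerate(images):
        delta = 2.0 * np.pi * n / N
        num += I * np.sin(delta)
        den += I * np.cos(delta)
    return -np.arctan2(num, den)

def unwrap_with_gray_code(phi, fringe_order):
    """Absolute phase from the wrapped phase and the fringe order k
    decoded from the gray-code patterns: Phi = phi + 2*pi*k.
    Phase jumps at fringe boundaries are what the paper's compensation
    function is designed to handle (not shown here)."""
    return phi + 2.0 * np.pi * fringe_order
```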