Commercial RGB-D sensors such as the Kinect and the Structure Sensor have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications that require high-quality 3D data, e.g., 3D building models of centimeter-level accuracy, these sensors must be calibrated accurately and reliably. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured-light concept. Additionally, a new automatic method is proposed for calibrating all RGB-D parameters, including the internal calibration parameters of all cameras, the baseline between the infrared and RGB cameras, and the depth error model. Compared with traditional calibration methods, the new model shows a significant improvement in depth precision at both near and far ranges.
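To make the depth-error-model idea concrete: structured-light sensors triangulate depth from disparity (z = f·b/d), so systematic errors tend to vary smoothly with inverse depth. The following is a minimal sketch of fitting an empirical correction from reference observations; the linear inverse-depth form, function names, and all parameter values are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def fit_inverse_depth_correction(z_meas, z_true):
    """Least-squares fit of an illustrative inverse-depth error model:
    1/z_true ~ a * (1/z_meas) + c, using reference depths z_true
    (e.g., planar targets at surveyed distances)."""
    A = np.column_stack([1.0 / z_meas, np.ones_like(z_meas)])
    (a, c), *_ = np.linalg.lstsq(A, 1.0 / z_true, rcond=None)
    return a, c

def correct_depth(z_meas, a, c):
    """Apply the fitted correction to raw sensor depths."""
    return 1.0 / (a / z_meas + c)

# Synthetic demonstration: a sensor whose range bias grows with distance.
z_true = np.linspace(0.5, 7.0, 50)                  # reference depths (m)
z_meas = 1.0 / (0.97 / z_true - 0.002)              # simulated biased readings
a, c = fit_inverse_depth_correction(z_meas, z_true)
print(np.abs(correct_depth(z_meas, a, c) - z_true).max())  # ~0 after fitting
```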
RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although widely used in various applications, RGB-D sensors have significant drawbacks for 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth errors that increase with distance from the sensor. In this paper, we present a novel approach that geometrically integrates the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, a precise calibration of the RGB-D sensor is introduced. In addition to the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the two cameras is calibrated. Second, to ensure accurate poses for the RGB images, a refined method for rejecting false feature matches is introduced, combining the depth information with the initial camera poses between frames of the RGB-D sensor. A global optimization model is then used to improve the accuracy of the camera poses, reducing the inconsistencies between depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated on publicly available benchmark datasets collected with the Kinect. The method is then examined on two datasets collected in outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.
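One standard closed-form way to recover a scaled rigid transform between a scale-ambiguous monocular RGB reconstruction and a metric depth scene is the Umeyama alignment. The sketch below assumes 3-D point correspondences are already available and omits the robust outlier handling the paper's recovery method would need; the function name and interface are ours.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Closed-form similarity transform (Umeyama, 1991) mapping the
    scale-ambiguous RGB reconstruction `src` onto the metric depth
    scene `dst`; both are (N, 3) arrays of corresponding points.
    Returns scale s, rotation R, translation t with dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)               # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # avoid reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = src_c.var(axis=0).sum()              # mean squared deviation
    s = np.trace(np.diag(D) @ S) / var_src         # metric scale factor
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In practice such an estimate would be wrapped in a robust loop (e.g., RANSAC over the correspondences) to tolerate the false matches that the paper's rejection step targets.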
Generating indoor as-built building information models (AB-BIMs) automatically and economically is a great technological challenge. Many approaches have been developed to address this problem in recent years, but it is far from settled, particularly for point cloud segmentation and the extraction of relationships among different elements in complicated indoor environments. The task is even more difficult for the low-quality point clouds generated by low-cost scanning equipment. This paper proposes an automatic AB-BIM generation framework that transforms the noisy 3D point cloud produced by a low-cost RGB-D sensor (about 708 USD in total for the data collection equipment: 379 USD for the Structure Sensor and 329 USD for the iPad) into an as-built BIM without any manual intervention. The experimental results show that the proposed method has competitive robustness and accuracy compared to a high-quality terrestrial lidar system (TLS), with an element extraction accuracy of 100%, a mean dimension reconstruction accuracy of 98.6%, and a mean area reconstruction accuracy of 93.6%. The proposed framework also makes BIM generation workflows more efficient in both data collection and data processing. In the experiments, the time required to collect data for a typical room with an area of 45–67 m² is reduced from 50–60 min with TLS to 4–6 min with an RGB-D sensor, and BIM models are generated automatically in about half a minute, compared with around 10 min for a conventional semi-manual method.
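A typical first step in turning a noisy indoor point cloud into BIM elements is segmenting the dominant planes (floor, ceiling, walls). The following is an illustrative RANSAC sketch under assumed tolerances and function names; it is a generic technique, not the paper's segmentation algorithm.

```python
import numpy as np

def ransac_plane(points, iters=500, tol=0.02, rng=None):
    """Fit one plane n.x + d = 0 to noisy points with RANSAC.
    points: (N, 3); tol: inlier distance threshold in meters (assumed)."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return (*best_model, best_inliers)

def extract_planes(points, n_planes=6, min_support=500):
    """Greedily peel off dominant planes as wall/floor/ceiling candidates."""
    remaining, planes = points.copy(), []
    for _ in range(n_planes):
        n, d, inliers = ransac_plane(remaining)
        if inliers.sum() < min_support:       # stop when support is too small
            break
        planes.append((n, d, remaining[inliers]))
        remaining = remaining[~inliers]
    return planes
```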
RGB-D cameras, which can be attached to any mobile device and work under different operating platforms (e.g., iOS, Android, and Windows), have great potential for indoor 3D modeling and navigation due to their low cost and small size. The main problems of RGB-D cameras in such applications are their limited range and deteriorating depth accuracy: at a 7 m range, the distance error of the Structure Sensor (one type of RGB-D camera) reaches nearly 0.5 m. We propose a new calibration procedure for RGB-D sensors to improve depth accuracy. First, the baseline between the RGB and IR cameras is calibrated using the direct linear transform (DLT) method. The distortions of the RGB camera, the IR camera, and the IR projector are then calibrated using the newly proposed two-lens distortion model. Finally, the remaining systematic depth errors are calibrated using an empirical model. Compared to existing calibration methods, the new method considers distortions from both the IR camera and the projector and significantly improves the accuracy of far-range depth measurements. The experimental results show that the proposed method can precisely calibrate the full range of the RGB-D sensor, up to 7 m, with an overall depth accuracy of 1.9%, compared to 5.5% for the manufacturer's depth estimation. To demonstrate the significance of calibration in indoor mapping, the 3D point cloud of a room (4.5 m × 3.5 m) is generated using an RGB-D SLAM system. The accuracy of the 3D model with the proposed calibration is approximately 1.5 cm, compared to 7.0 cm using the manufacturer's calibration parameters.
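To illustrate why modeling two lenses matters: structured-light depth is triangulated from the disparity between the projector pattern and the IR image, and in normalized image coordinates the ideal disparity along the baseline equals baseline/Z, so distortion on either side biases the triangulated depth. The sketch below uses a Brown-Conrady model for each lens; the coefficient layout and function names are assumptions for illustration, not the paper's exact two-lens parameterization.

```python
import numpy as np

def brown_distort(xy, k1, k2, p1, p2):
    """Brown-Conrady radial + tangential distortion applied to ideal
    normalized image coordinates, shape (N, 2)."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.column_stack([xd, yd])

def predicted_depth(x_cam, x_proj, cam_coeffs, proj_coeffs, baseline):
    """Forward model of the depth a structured-light sensor would report:
    distort the ideal camera and projector coordinates with their own lens
    coefficients, then triangulate from the resulting disparity. Minimizing
    (predicted - observed) depth residuals over reference targets estimates
    both coefficient sets."""
    xc = brown_distort(x_cam, *cam_coeffs)    # IR-camera lens
    xp = brown_distort(x_proj, *proj_coeffs)  # projector lens
    disparity = xc[:, 0] - xp[:, 0]           # along the baseline axis
    return baseline / disparity               # z = b / disparity (normalized)
```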