Abstract- To address the current challenges of feature-based RGB-D SLAM, this paper proposes a novel sparse direct localization algorithm. The contributions of this paper are threefold. First, the proposed algorithm achieves rapid feature-point detection and camera pose estimation by minimizing the photometric error between aligned image pairs. Second, a computational optimization scheme is introduced in which keyframes are selected adaptively by a spatial-domain framework that monitors the robot's motion in real time, and a nearest-neighbor algorithm is applied for loop-closure detection. Finally, the proposed algorithm estimates and optimizes the robot pose in real time using the General Framework for Graph Optimization (g2o). The performance of the proposed algorithm is verified through live robotic experiments. The results show that the scheme attains high localization accuracy, with low RMSE over a range of 25 meters. For scenarios where the camera remains fixed, the RMSE stays within 1% over a range of 29.6 meters. Furthermore, the proposed scheme achieves localization speeds of up to 45 fps, demonstrating strong real-time performance and addressing the computational drawbacks of the state of the art.

Keywords- simultaneous localization and mapping; sparse direct method; keyframe selection; Kinect
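For concreteness, the photometric error that sparse direct methods typically minimize can be sketched as follows; the notation here is a standard formulation from the direct visual odometry literature, not the paper's own:

\[
\xi^{*} = \arg\min_{\xi} \sum_{p_i \in \mathcal{P}} \left\| I_{k}\!\left( \pi\!\left( T(\xi)\, \pi^{-1}(p_i, d_i) \right) \right) - I_{k-1}(p_i) \right\|^{2},
\]

where \(\mathcal{P}\) is the sparse set of selected pixels in the reference frame \(I_{k-1}\), \(d_i\) is the depth of pixel \(p_i\) read from the RGB-D sensor, \(\pi\) and \(\pi^{-1}\) are the pinhole projection and back-projection functions, and \(T(\xi)\) is the rigid-body transform parameterized by the pose increment \(\xi \in \mathfrak{se}(3)\).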
I. INTRODUCTION

Over the past decade, outdoor positioning based on satellite technologies such as the Global Positioning System (GPS) has developed rapidly and become widely used. However, people spend more than 70% of their time indoors, so indoor positioning technology offers great research and application value. Indoor environments are complex in nature, and it is therefore necessary to estimate the real-time position of mobile nodes from a series of measurement data. To the best of our knowledge, no working solution with sufficient application potential exists thus far [1].

In the field of robot localization, Simultaneous Localization and Mapping (SLAM) algorithms based on laser and vision have been widely adopted. SLAM uses measurement data from hardware sensors (e.g., cameras, lasers) to build a map of the environment while estimating the position of the mobile robot [2]. SLAM is of significant research importance for robotic control, navigation, and mission planning [3]. Sensors are an essential component of SLAM; commonly used sensors include laser radar, monocular/stereo/panoramic cameras, RGB-D cameras, Inertial Measurement Units, and multi-sensor fusion schemes.

Among these, RGB-D cameras (e.g., the Kinect), which simultaneously capture pixel colors and depth information, have been widely researched in recent years. Although RGB-D SLAM systems have evolved rapidly, they still face challenging issues such as the small Field of View (FOV) of the RGB-D camera and high computational complexity.
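As an illustration of the depth information an RGB-D camera provides, the sketch below back-projects a single pixel into a 3-D point in the camera frame using the standard pinhole model; the intrinsic values are illustrative Kinect-style placeholders, not parameters reported in this paper:

import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    # Pinhole back-projection: pixel (u, v) with metric depth -> 3-D point
    # in the camera frame. Intrinsics (fx, fy, cx, cy) are placeholders.
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example: a pixel near the image center observed 1.5 m away,
# with Kinect-style intrinsics (fx = fy = 525 for a 640x480 image).
point = backproject(320, 240, 1.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(point)  # -> approximately [0.00143, 0.00143, 1.5]

Applying this back-projection to every sparse feature pixel yields the 3-D points whose reprojection into the next frame drives the photometric error minimization described in the abstract.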