In this paper, we propose a semantic simultaneous localization and mapping (SLAM) framework for rescue robots and report its use in navigation tasks. Our framework generates not only geometric maps in the form of dense point clouds but also corresponding point-wise semantic labels produced by a semantic segmentation convolutional neural network (CNN). The segmentation CNN is trained on our RGB-D dataset of the RoboCup Rescue Robot League (RRL) competition environment. With the help of semantic information, the rescue robot can identify different types of terrain in a complex environment and thus avoid specific obstacles or choose routes with better traversability. To reduce segmentation noise, our approach uses depth images to filter the per-frame segmentation results. The overall semantic map is then further refined at the level of point-cloud voxels: by accumulating the results of multiple frames in each voxel, we obtain semantic maps with consistent labels. To demonstrate the benefit of a semantic map of the environment, we report a case study of how the semantic map can be used in a navigation task to reduce the arrival time while ensuring safety. The experimental results show that our semantic SLAM framework can generate a dense semantic map of the complex RRL competition environment, with which the arrival time in the navigation task is effectively reduced.
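As a rough illustration of the voxel-level label accumulation described above, the sketch below keeps a per-voxel histogram of the class labels observed across frames and reports the majority class. The `SemanticVoxelMap` class, the 5 cm voxel size, and the plain majority vote are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from collections import defaultdict

VOXEL_SIZE = 0.05  # meters; hypothetical, the paper does not state a size


def voxel_key(point, voxel_size=VOXEL_SIZE):
    """Quantize a 3D point to its integer voxel index."""
    return tuple(np.floor(np.asarray(point) / voxel_size).astype(int))


class SemanticVoxelMap:
    """Accumulates per-frame semantic labels into voxels and reports the
    majority label per voxel, smoothing out single-frame segmentation noise."""

    def __init__(self, num_classes):
        self.counts = defaultdict(lambda: np.zeros(num_classes, dtype=int))

    def integrate_frame(self, points, labels):
        """points: (N, 3) array of 3D points; labels: (N,) class ids."""
        for p, c in zip(points, labels):
            self.counts[voxel_key(p)][c] += 1

    def label_of(self, point):
        """Most frequently observed class in the point's voxel, if any."""
        votes = self.counts.get(voxel_key(point))
        return None if votes is None else int(np.argmax(votes))
```

Because each voxel integrates observations from many frames, an occasional misclassified frame is outvoted, which is the intuition behind the label consistency the abstract reports.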
Autonomous mobile robots are often equipped with monocular cameras and 3D LiDARs to perform vital tasks such as localization and mapping. In this paper, we present a two-stage extrinsic calibration method as well as a hybrid-residual-based odometry approach for such camera-LiDAR systems. Our extrinsic calibration method estimates the relative transformation between the camera and the LiDAR with high accuracy, allowing us to better register the image and point-cloud data. After calibration, our hybrid-residual-based odometry provides real-time, accurate odometry estimates. The approach exploits both direct and indirect image features: sensor motion is estimated by jointly minimizing reprojection residuals and photometric residuals in a nonlinear optimization procedure. Experiments on both public and our own real-world datasets demonstrate the accuracy and robustness of the extrinsic calibration and odometry algorithms. The results suggest that our calibration method provides accurate extrinsic parameter estimates without requiring initial values, and that our odometry approach achieves competitive estimation accuracy and robustness.
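To make the hybrid-residual idea concrete, here is a compressed sketch of a joint optimization that stacks reprojection (indirect) and photometric (direct) residuals into one nonlinear least-squares problem. The pinhole intrinsics, the scalar weight `w_photo`, and the nearest-pixel intensity lookup are placeholder assumptions; the paper's actual formulation (robust kernels, interpolation, pose parameterization) may well differ.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Illustrative pinhole intrinsics; real values come from camera calibration.
FX = FY = 500.0
CX, CY = 320.0, 240.0


def transform(pose, points):
    """Apply a 6-dof pose (3 rotation-vector + 3 translation parameters)."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    return points @ R.T + pose[3:]


def project(points_cam):
    """Pinhole projection of camera-frame 3D points to pixel coordinates."""
    z = points_cam[:, 2]
    u = FX * points_cam[:, 0] / z + CX
    v = FY * points_cam[:, 1] / z + CY
    return np.stack([u, v], axis=1)


def hybrid_residuals(pose, pts3d, kps2d, image, ref_intensities, w_photo=0.1):
    """Stack indirect (reprojection) and direct (photometric) residuals."""
    proj = project(transform(pose, pts3d))
    r_reproj = (proj - kps2d).ravel()
    # Photometric term: grayscale intensity at the reprojected pixel vs. the
    # intensity recorded for the same point in the reference frame.
    u = np.clip(proj[:, 0].astype(int), 0, image.shape[1] - 1)
    v = np.clip(proj[:, 1].astype(int), 0, image.shape[0] - 1)
    r_photo = image[v, u] - ref_intensities
    return np.concatenate([r_reproj, w_photo * r_photo])


# Hypothetical usage, given pts3d, kps2d, a grayscale image, and reference
# intensities for each point:
# result = least_squares(hybrid_residuals, np.zeros(6),
#                        args=(pts3d, kps2d, image, ref_intensities))
```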
In this paper, we address the problem of autonomous exploration of unknown environments by ground mobile robots with deep reinforcement learning (DRL). To explore unknown environments effectively, we construct an exploration graph that encodes historical trajectories, frontier waypoints, landmarks, and obstacles. To take full advantage of the spatiotemporal features and historical information in the autonomous exploration task, we propose a novel network called the Spatiotemporal Neural Network on Graph (Graph-STNN). Specifically, Graph-STNN extracts spatial features with a graph convolutional network (GCN) and temporal features with a temporal convolutional network (TCN); a gated recurrent unit (GRU) then fuses the spatial features, the temporal features, and the historical state information into the current state feature. Combined with DRL, Graph-STNN estimates the optimal target point from the extracted hybrid features. Simulation experiments show that our approach is more effective than a GCN-based approach and an information-entropy-based approach, and that Graph-STNN generalizes better than the GCN-based, information-entropy-based, and random methods. Finally, we validate our approach on the Stage simulation platform with an actual robot model.
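The following sketch shows one plausible way to wire the described pipeline together in PyTorch: a GCN layer for spatial features over the exploration graph, a dilated 1-D convolution standing in for the TCN over past embeddings, and a GRU cell that fuses both with the previous hidden state. The layer sizes, single-layer depth, and mean pooling over nodes are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class GraphSTNNSketch(nn.Module):
    """Illustrative Graph-STNN-style forward pass: GCN (spatial) ->
    TCN (temporal) -> GRU (fusion into the current state feature)."""

    def __init__(self, node_dim, hidden_dim):
        super().__init__()
        self.gcn_w = nn.Linear(node_dim, hidden_dim)       # spatial (GCN)
        self.tcn = nn.Conv1d(hidden_dim, hidden_dim,       # temporal (TCN)
                             kernel_size=3, padding=2, dilation=2)
        self.gru = nn.GRUCell(2 * hidden_dim, hidden_dim)  # fusion (GRU)

    def forward(self, x, adj_norm, history, h_prev):
        # x: (N, node_dim) node features; adj_norm: (N, N) normalized adjacency
        spatial = torch.relu(adj_norm @ self.gcn_w(x)).mean(dim=0)  # (hidden,)
        # history: (T, hidden_dim) spatial embeddings from past steps
        temporal = self.tcn(history.t().unsqueeze(0))[0, :, -1]     # (hidden,)
        fused = torch.cat([spatial, temporal]).unsqueeze(0)
        # h_prev: (hidden_dim,) previous hidden state carrying history
        return self.gru(fused, h_prev.unsqueeze(0)).squeeze(0)      # state feature
```

The resulting state feature would then be fed to the DRL policy and value heads to score candidate target points such as frontier waypoints.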