Loop closure detection is crucial for simultaneous localization and mapping (SLAM) because it corrects accumulated drift. Complex scenarios place high demands on the robustness of loop closure detection, and traditional feature-based methods often fail to meet them. To address this problem, this paper proposes a robust and efficient deep-learning-based loop closure detection approach. MixVPR is employed to extract global descriptors from keyframes and build a global descriptor database, while SuperPoint is used for local feature extraction. The global descriptor database is then queried for loop frame candidates, and LightGlue matches the local features of the most similar loop frame against the current keyframe; from these matches, the relative pose is computed. The approach is first evaluated on several public datasets, where the results show strong robustness to complex environments. It is further validated on a real-world dataset collected by a drone, achieving accurate performance and good robustness under challenging conditions. Additionally, an analysis of time and memory costs shows that the approach maintains accuracy while delivering satisfactory real-time performance.
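To make the retrieval-and-verification pipeline concrete, here is a minimal sketch, not the authors' implementation: find_loop_candidates and relative_pose are hypothetical helper names, db_descs stands in for MixVPR global descriptors, and pts_cur / pts_loop stand in for SuperPoint keypoints already matched by LightGlue. Pose recovery uses OpenCV's standard essential-matrix routines.

```python
import numpy as np
import cv2

def find_loop_candidates(query_desc, db_descs, top_k=5, min_sim=0.7):
    """Retrieve loop-frame candidates from the global descriptor database.

    query_desc: (D,) global descriptor of the current keyframe.
    db_descs:   (N, D) descriptors of past keyframes.
    Descriptors are assumed L2-normalized, so a dot product gives
    cosine similarity.
    """
    sims = db_descs @ query_desc               # (N,) similarity scores
    order = np.argsort(-sims)[:top_k]          # best candidates first
    return [(int(i), float(sims[i])) for i in order if sims[i] >= min_sim]

def relative_pose(pts_cur, pts_loop, K):
    """Estimate the relative pose from matched 2D keypoints.

    RANSAC on the essential matrix rejects outlier matches before
    recovering R and t (translation is up to scale for monocular input).
    """
    E, inlier_mask = cv2.findEssentialMat(
        pts_cur, pts_loop, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_cur, pts_loop, K, mask=inlier_mask)
    return R, t
```

With normalized descriptors, candidate retrieval reduces to a top-k dot product over the database, which is what keeps the global stage cheap compared with running local matching exhaustively.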
Visual Simultaneous Localization and Mapping (VSLAM) is important for unmanned driving: it localizes the vehicle, builds environmental maps, and provides a basis for navigation and decision making. However, in unavoidable nighttime environments, SLAM systems still suffer from degraded robustness and accuracy. To address this, this paper proposes a VSLAM pipeline called DarkSLAM, comprising three modules: Camera Attribute Adjustment (CAA), Image Quality Enhancement (IQE), and Pose Estimation (PE). The CAA module studies strategies for setting camera parameters in low-illumination environments, improving the quality of the raw images. The IQE module performs noise-suppressed image enhancement to improve contrast and texture detail. In the PE module, a lightweight feature extraction network is constructed and trained in a pseudo-supervised manner on low-light datasets, enabling efficient and robust data association for pose estimation. Experiments on low-light public datasets and real-world trials in the dark verify the necessity of the CAA and IQE modules and the parameter coupling between them, and ultimately confirm the feasibility of DarkSLAM. In particular, the scene in the NEU-4am experiment has no artificial light (illumination between 0.01 and 0.08 lux), and DarkSLAM achieved an accuracy of 5.2729 m over a distance of 1794.33 m (roughly 0.3% of the path length).
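The IQE idea (suppress noise first, then boost contrast and texture) can be illustrated with standard image operations. This is a sketch under stated assumptions, not DarkSLAM's actual IQE algorithm, which the abstract does not specify; non-local-means denoising followed by CLAHE is simply one common way to realize noise-suppressed enhancement.

```python
import cv2

def iqe_enhance(gray):
    """Illustrative noise-suppressed low-light enhancement.

    Denoising before the contrast boost keeps the enhancement step
    from amplifying sensor noise; CLAHE then raises local contrast to
    recover texture detail. Both choices are stand-ins, not the
    paper's stated method.
    """
    denoised = cv2.fastNlMeansDenoising(gray, None, 10)   # filter strength h=10
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)
```

Ordering matters here: applying contrast enhancement directly to a dark, noisy frame would sharpen the noise along with the texture, which is presumably why the abstract emphasizes that the enhancement is noise-suppressed.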